I am scraping a web page (Twitter) using `webdriver.PhantomJS`.
I'd like to scroll through all the results and collect every tweet, but so far I only know how to fetch the page and scroll a fixed number of times:
    for _ in range(500):
        browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(0.2)
Since I'm searching past data rather than real-time data (for example, tweets from May 1 to May 2), the number of results is fixed. However, I can't know in advance how many tweets there are, so I can't decide how many times to scroll.
How do I write code that keeps scrolling until all results have loaded?
I've seen a lot of answers while searching, but I've had a hard time applying them to my code, so I'm asking this question.
My entire code is below (Python 3):
    import time
    from selenium import webdriver

    # Raw string so the Windows path's backslashes are not treated as escapes
    browser = webdriver.PhantomJS(r'C:\phantomjs-2.1.1-windows\bin\phantomjs')
    url = 'https://twitter.com/search?f=tweets&vertical=default&q=%EB%B0%B0%EA%B3%A0%ED%8C%8C%20since%3A2017-07-19%20until%3A2017-07-20&l=ko&src=typd&lang=ko'
    browser.get(url)
    time.sleep(1)

    # Fixed number of scrolls -- this is the part I want to replace
    for _ in range(500):
        browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(0.2)

    tweets = browser.find_elements_by_class_name('tweet-text')
    with open("ttttttt.txt", mode='w', encoding='utf8') as wfile:
        for i, tweet in enumerate(tweets):
            data = {'text': tweet.text}
            print(i, ":", data)
            wfile.write(str(data) + '\n')
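A common way to handle infinite scroll without knowing the result count is to compare `document.body.scrollHeight` before and after each scroll, and stop when it stops growing. The sketch below is one way to do that; `scroll_to_end` is a hypothetical helper name, and `pause` and `max_rounds` are assumed tuning parameters, not anything from the original script.

```python
import time

def scroll_to_end(driver, pause=1.0, max_rounds=500):
    """Scroll to the bottom until the page height stops growing.

    `driver` is any object with an execute_script method (e.g. a
    webdriver.PhantomJS instance). `pause` gives the page time to load
    the next batch of tweets; `max_rounds` is a safety cap so the loop
    cannot run forever.
    """
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # wait for the next batch of results to load
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # nothing new loaded: we reached the end of the results
        last_height = new_height
```

In the script above, this would replace the `for _ in range(500)` loop: call `scroll_to_end(browser)` before `find_elements_by_class_name`. If the network is slow, a too-short `pause` can make the height check fire before new tweets arrive, ending the loop early, so increase it if results are cut off.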