
Let's say I want to scrape this page: https://twitter.com/nfl

from bs4 import BeautifulSoup
import requests

page = 'https://twitter.com/nfl'
r = requests.get(page)
soup = BeautifulSoup(r.text, 'html.parser')  # name the parser explicitly
print(soup)

The more I scroll down the page, the more results show up, but the request above only gives me the initial load. How do I get all the information on the page, as if I had manually scrolled down?

jason
  • Hi, I am in a similar situation to yours; my recommendation is to learn a little bit of JS (that is what I am doing right now). You can actually call the JS file with appropriate parameters to make it output the data directly to a file (JSON, most likely). But since I am still learning it, I can't provide a better solution; correct me if I am wrong. The case I am working on is http://stocktwits.com/symbol/aapl . I hope it helps you a bit. – LegitMe Mar 28 '16 at 08:06

4 Answers


First, parse the data-max-id="451819302057164799" value from the HTML source.

Then, using the id 451819302057164799, construct a URL like this:

https://twitter.com/i/profiles/show/nfl/timeline?include_available_features=1&include_entities=1&max_id=451819302057164799

Now fetch that link and parse the response using simplejson or any other JSON library.

Remember, the next page load (when you scroll down) is available via the "max_id":"451369755908530175" value in that JSON.
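
A minimal sketch of that flow, assuming BeautifulSoup can locate the element carrying the data-max-id attribute (which element holds it is an assumption, as is the items_html key). A requests session keeps the connection alive across GETs, as Curro suggests in the comments:

import requests
from bs4 import BeautifulSoup

session = requests.Session()  # keep-alive across GETs

# Step 1: parse data-max-id out of the profile page's HTML source.
html = session.get('https://twitter.com/nfl').text
soup = BeautifulSoup(html, 'html.parser')
container = soup.find(attrs={'data-max-id': True})  # element is an assumption
max_id = container['data-max-id']

# Step 2: construct the timeline URL and parse the JSON it returns.
timeline = ('https://twitter.com/i/profiles/show/nfl/timeline'
            '?include_available_features=1&include_entities=1'
            '&max_id={0}'.format(max_id))
data = session.get(timeline).json()

# "max_id" in the JSON is the cursor for the next scroll-load; loop on it.
print(data['max_id'])
html_chunk = data.get('items_html', '')  # rendered tweets; key is an assumption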

Sabuj Hassan
  • `https://twitter.com/i/profiles/show/nfl/timeline?include_available_features=1&include_entities=1&max_id=451819302057164799` Is this a generic solution for all Twitter pages? How do you know how to construct that specific URL? – jason Apr 04 '14 at 13:01
  • @jason_cant_code It may be; I didn't check. Maybe `nfl` is the key that changes for different pages. – Sabuj Hassan Apr 04 '14 at 13:02
  • I don't think this works. I'm getting a much shorter page than expected. – jason Apr 04 '14 at 13:19
  • Use a requests session to ensure that you keep your session alive on every GET. – Curro Apr 04 '14 at 21:46

If the content is added dynamically with JavaScript, your best chance is to use Selenium to control a headless browser like PhantomJS: use the Selenium webdriver to simulate the scroll-down, wait for the new content to load, and only then extract the HTML and feed it to your BS parser.
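
A minimal sketch of that approach, assuming PhantomJS (or any other webdriver) is installed and that two seconds is long enough for each batch of tweets to load; both the timing and the stop condition are assumptions:

import time
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.PhantomJS()  # headless; webdriver.Firefox() also works
driver.get('https://twitter.com/nfl')

# Scroll until the page height stops growing, i.e. nothing new is loading.
last_height = driver.execute_script('return document.body.scrollHeight')
while True:
    driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
    time.sleep(2)  # wait for the new content to render
    new_height = driver.execute_script('return document.body.scrollHeight')
    if new_height == last_height:
        break
    last_height = new_height

# Only now hand the fully rendered HTML to the BS parser.
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()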

Javier
  • See http://stackoverflow.com/questions/14583560/selenium-retrieve-data-that-loads-while-scrolling-down for scrolling down – dorvak Apr 04 '14 at 12:35
  • I'll try this if there is no better solution – jason Apr 04 '14 at 14:30
  • Yes, Selenium is always an option, but IMO it is not the best one. I prefer to figure out the HTTP traffic between browser and server and simulate it using requests, urllib, or whatever; it is much faster than Selenium. – Curro Apr 04 '14 at 21:51

A better solution is to use the Twitter API.

There are several Python Twitter API clients; tweepy is one example.
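
A minimal sketch with tweepy, assuming an app has been registered with Twitter to obtain OAuth credentials (the keys below are placeholders):

import tweepy

# Placeholder credentials: substitute your own app's keys.
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')
api = tweepy.API(auth)

# Cursor does the max_id pagination that the scraping answers rebuild by hand.
for tweet in tweepy.Cursor(api.user_timeline, screen_name='nfl').items(200):
    print(tweet.text)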

alecxe

For dynamically generated content, the data is usually delivered as JSON. So inspect the page, go to the Network tab in the browser's developer tools, and find the request that returns the data on the fly. For example, on the page https://techolution.app.param.ai/jobs/ the data is generated dynamically, and I found this link: https://techolution.app.param.ai/api/career/get_job/?query=&locations=&category=&job_types=
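
A minimal sketch of that approach, requesting the endpoint above directly; the shape of the returned JSON is an assumption, so inspect it before relying on specific keys:

import requests

# Hit the JSON endpoint found in the Network tab, not the rendered page.
url = ('https://techolution.app.param.ai/api/career/get_job/'
       '?query=&locations=&category=&job_types=')
data = requests.get(url).json()
print(data)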

After that, the web scraping becomes fairly easy. I have done it in Python using Anaconda Navigator; here is the GitHub link: https://github.com/piperaprince01/Webscraping_python/blob/master/WebScraping.ipynb

If you can make any changes to improve it, feel free to do so. Thank you.