
Some pages do not return raw data (JSON, XML, or HTML) from their AJAX calls. Instead they use a framework such as Dojo, where the AJAX calls return JavaScript files that in turn populate the HTML nodes.

I am wondering if there is a non-Selenium strategy for scraping data from these pages.

yayu
  • As far as I know, you need a browser. If you do not want Selenium, try PhantomJS or similar. Have you tried http://jeanphix.me/Ghost.py/? (a sketch follows these comments) – gosom Dec 12 '14 at 13:31
  • @gosom PhantomJS is pretty cool and works well in my current implementation locally. The downside is that it is very slow. I also had problems deploying the code to Heroku, so I'm wondering if there's something better. – yayu Dec 12 '14 at 13:35
  • Maybe try this: http://www.codeproject.com/Articles/528293/Scraping-JavaScript-webpages-with-Webkit – gosom Dec 12 '14 at 13:39
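To illustrate gosom's Ghost.py suggestion: the sketch below shows roughly how it could be used to render a JavaScript-driven page and read back the resulting HTML. Treat it as an assumption-laden sketch rather than definitive usage: the URL is a placeholder, and the `ghost.open()` / `ghost.content` calls follow the older (0.x) Ghost.py API, which changed in later releases.

    from ghost import Ghost

    # Headless WebKit: Ghost.py executes the page's JavaScript, so the
    # Dojo-populated nodes end up in the rendered DOM.
    ghost = Ghost()

    # Placeholder URL -- substitute the page you are scraping.
    page, resources = ghost.open('http://example.com/dynamic-page')

    # The rendered HTML after the scripts have run; it can now be fed to
    # any ordinary HTML parser (lxml, BeautifulSoup, ...).
    html = ghost.content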

1 Answer


As an alternative to the Selenium- or WebKit-based approaches, you can parse the JavaScript itself with a JavaScript code parser, like slimit. It definitely raises the complexity of the web scraping, since you go down to the bare metal with it: think of it as a "white box" approach, as opposed to the high-level, Selenium-based "black box" one.

Here's the answer I've given to the exact same topic/problem you are asking about:

It involves using slimit to grab an object from the JavaScript code, loading it into a Python data structure via the json module, and parsing the HTML inside it with the BeautifulSoup parser.
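As a concrete sketch of that pipeline: suppose the returned .js file assigns the data to a variable. The variable name "payload" and the shape of the object below are made up for illustration; in practice you would inspect the actual script to find the assignment that holds the data.

    import json

    from bs4 import BeautifulSoup
    from slimit import ast
    from slimit.parser import Parser
    from slimit.visitors import nodevisitor

    # Stand-in for the body of the .js file the ajax call returned;
    # the variable name "payload" is hypothetical.
    js_code = 'var payload = {"html": "<ul><li>first</li><li>second</li></ul>"};'

    # Walk the JavaScript syntax tree, find the declaration of "payload",
    # and serialize its right-hand side back to source, which here is
    # valid JSON.
    tree = Parser().parse(js_code)
    data = None
    for node in nodevisitor.visit(tree):
        if isinstance(node, ast.VarDecl) and node.identifier.value == 'payload':
            data = json.loads(node.initializer.to_ecma())
            break

    # The object carries an HTML fragment; parse it with BeautifulSoup.
    soup = BeautifulSoup(data['html'], 'html.parser')
    print([li.get_text() for li in soup.find_all('li')])  # ['first', 'second']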

alecxe