18

I'm trying to scrape and submit information to websites that rely heavily on JavaScript for most of their actions. The websites won't even work when I disable JavaScript in my browser.

I've searched for some solutions on Google and SO, and someone suggested I should reverse engineer the JavaScript, but I have no idea how to do that.

So far I've been using Mechanize, and it works on websites that don't require JavaScript.

Is there any way to access websites that use JavaScript with urllib2 or something similar? I'm also willing to learn JavaScript, if that's what it takes.

user216171
  • 1,686
  • 3
  • 15
  • 22
  • Out of curiosity, what is the purpose of this exercise? Do the websites mind you submitting data to their forms automatically? – Tom Gullen Jul 29 '10 at 13:22
  • This is basically not possible. – Katriel Jul 29 '10 at 13:24
  • 3
    Tom, I don't think they mind. Or at least I hope they don't. Katrielalex, I seriously doubt that. – user216171 Jul 29 '10 at 13:37
  • Hehe, I think I'm behind-the-times a bit. The link below looks pretty good; `crowbar` actually *renders the entire page with gecko* for you, all behind the scenes! – Katriel Jul 29 '10 at 13:52
  • Technically, scraping javascript output should definitely be possible because your browser does it! There are just a lot of weirdnesses that come from this... what happens if there is some sort of asynchronous request, or something waits a second before outputting? – Donald Miner Jul 29 '10 at 13:56
  • 1
    http://stackoverflow.com/questions/857515/screen-scraping-from-a-web-page-with-a-lot-of-javascript – Jason Orendorff Jul 29 '10 at 14:06
  • 1
    @orangeoctopus There's nothing you can do about Ajax, but it's OK for all the rest of the JavaScript using `PyQt4.QtWebKit`. – Guillaume Lebourgeois Jul 29 '10 at 15:48

6 Answers

11

I wrote a small tutorial on this subject, this might help:

http://koaning.io.s3-website.eu-west-2.amazonaws.com/dynamic-scraping-with-python.html

Basically, you have the Selenium library drive a Firefox browser; the browser waits until all the JavaScript has loaded before passing you the HTML string. Once you have this string, you can parse it with BeautifulSoup.
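The flow described above can be sketched as below. The Selenium part needs a real Firefox install (and, with modern Selenium, a geckodriver on the PATH), so it is shown commented out; the parsing step is demonstrated with the standard library's `html.parser` as a lightweight stand-in for BeautifulSoup, so the sketch runs anywhere. The URL and HTML string are placeholders.

```python
from html.parser import HTMLParser  # stdlib stand-in for BeautifulSoup


class TitleGrabber(HTMLParser):
    """Pull the <title> text out of an HTML string."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def extract_title(html):
    parser = TitleGrabber()
    parser.feed(html)
    return parser.title


if __name__ == "__main__":
    # With Selenium and Firefox available, `html` would instead come
    # from the browser after the page's JavaScript has run:
    #   from selenium import webdriver
    #   driver = webdriver.Firefox()
    #   driver.get("http://example.com")  # placeholder URL
    #   html = driver.page_source
    #   driver.quit()
    html = "<html><head><title>Rendered page</title></head><body></body></html>"
    print(extract_title(html))  # prints: Rendered page
```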

cantdutchthis
  • 31,949
  • 17
  • 74
  • 114
9

You should look into using Ghost, a Python library that wraps the PyQt4 + WebKit hack.

This makes g the WebKit client:

import ghost
g = ghost.Ghost()

You can grab a page with g.open(url) and then g.content will evaluate to the document in its current state.

Ghost has other cool features, like injecting JS and some form filling methods, and you can pass the resulting document to BeautifulSoup and so on: soup = bs4.BeautifulSoup(g.content).

So far, Ghost is the only thing I've found that makes this kind of thing easy in Python. The only limitation I've come across is that you can't easily create more than one instance of the client object, ghost.Ghost, but you could work around that.
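For reference, the pieces above combine into something like the following. This is a hedged sketch: it assumes the Ghost.py package (and its PyQt4 + WebKit dependencies) is installed, and the import is kept inside the function so the sketch loads even where Ghost is absent.

```python
def fetch_with_ghost(url):
    """Load `url` in Ghost's WebKit client and return the post-JS HTML."""
    import ghost  # requires Ghost.py and its PyQt4 + WebKit stack

    g = ghost.Ghost()
    g.open(url)       # navigates and runs the page's JavaScript
    return g.content  # the document in its current state
```

You could then hand the result straight to BeautifulSoup, e.g. `soup = bs4.BeautifulSoup(fetch_with_ghost(url))`.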

hbaderts
  • 14,136
  • 4
  • 41
  • 48
Carl Smith
  • 3,025
  • 24
  • 36
8

I've had exactly the same problem. It is not simple at all, but I finally found a great solution, using PyQt4.QtWebKit.

You will find the explanations on this webpage: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/

I've tested it, I currently use it, and it's great!

Its great advantage is that it can run on a server using only X, without a full graphical environment.
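The recipe in that post boils down to roughly the following. This is a sketch, not the post verbatim: it assumes PyQt4 with QtWebKit is installed, the class and slot names follow the classic recipe, and the original targets Python 2. The imports live inside the function so the sketch loads without PyQt4.

```python
def render_page(url):
    """Load `url` in an off-screen QtWebKit page and return the HTML
    after its JavaScript has run."""
    from PyQt4.QtCore import QUrl
    from PyQt4.QtGui import QApplication
    from PyQt4.QtWebKit import QWebPage

    class Render(QWebPage):
        def __init__(self, url):
            self.app = QApplication([])
            QWebPage.__init__(self)
            self.loadFinished.connect(self._load_finished)
            self.mainFrame().load(QUrl(url))
            self.app.exec_()  # spin the event loop until loadFinished fires

        def _load_finished(self, result):
            self.frame = self.mainFrame()
            self.app.quit()

    return Render(url).frame.toHtml()  # a QString in the PyQt4 API
```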

Guillaume Lebourgeois
  • 3,796
  • 1
  • 20
  • 23
7

Check out crowbar. I haven't had any experience with it, but I was curious about the answer to your question so I started googling around. I'd like to know if this works out for you.

http://grep.codeconsult.ch/2007/02/24/crowbar-scrape-javascript-generated-pages-via-gecko-and-rest/

Donald Miner
  • 38,889
  • 8
  • 95
  • 118
7

Maybe you could use Selenium WebDriver, which has Python bindings, I believe. I think it's mainly used as a tool for testing websites, but I guess it should be usable for scraping too.

Steven
  • 28,002
  • 5
  • 61
  • 51
  • 1+ Selenium is a great tool for scraping (if you don't mind how heavy it is). The only downside is that you'll see the browser doing what you want. – Diego Castro Dec 08 '10 at 19:58
  • It is possible to run [Selenium headless](http://stackoverflow.com/questions/7568899/does-selenium-support-headless-browser-testing), without any display. – Steven Almeroth Jul 16 '12 at 19:30
  • @stav Though there seems to be no official support for running Selenium headless, you can use xvfb, which is like /dev/null and absorbs the whole display. The first result on Google should help: http://www.alittlemadness.com/2008/03/05/running-selenium-headless/ – pranavk Aug 05 '12 at 11:02
7

I would actually suggest using Selenium. It's mainly designed for testing web applications from a "user perspective"; however, it is basically a "Firefox" driver. I've actually used it for this purpose, although I was scraping a dynamic AJAX webpage. As long as the JavaScript form has a recognizable "anchor text" that Selenium can "click", everything should sort itself out.
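A hedged sketch of that "click the anchor text" idea (assuming Selenium is installed and a Firefox/geckodriver setup is available; the modern `By.LINK_TEXT` locator replaces the older `find_element_by_link_text`). The import sits inside the function so the sketch loads without Selenium installed.

```python
def click_anchor_text(url, text):
    """Open `url`, click the link whose visible text is `text`, and
    return the page source after the click."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get(url)
        driver.find_element(By.LINK_TEXT, text).click()
        return driver.page_source
    finally:
        driver.quit()
```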

Hope that helps

JudoWill
  • 4,741
  • 2
  • 36
  • 48