2

I want to write a Perl application that crawls websites and collects images and links from the pages it visits. Since most pages use JavaScript to generate or modify the HTML content, I need something like a client browser with JavaScript support, so that I can parse the final HTML after the scripts have run. What are my options?

If possible, please publish some implementation code or link to some example(s).

Ωmega
  • possible duplicate of [How can I handle Javascript in a Perl web crawler?](http://stackoverflow.com/questions/3769015/how-can-i-handle-javascript-in-a-perl-web-crawler) – Ilmari Karonen Mar 05 '12 at 01:41

5 Answers

10

There are several options.

Quentin
6

Options that spring to mind:

  • You could have Perl use Selenium and have a full-blown browser do the work for you.

  • You can download and compile V8 or another open-source JavaScript engine and have Perl call an external program to evaluate the JavaScript.

  • I don't think Perl's LWP module supports JavaScript, but you might want to check that if you haven't done so already.
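For the Selenium route, a minimal sketch using the Selenium::Remote::Driver module from CPAN might look like the following (it assumes a Selenium server is already listening on the default localhost:4444, and the URL is a placeholder):

```perl
use strict;
use warnings;
use Selenium::Remote::Driver;

# Connect to a running Selenium server (localhost:4444 by default)
my $driver = Selenium::Remote::Driver->new(
    browser_name => 'firefox',
);

$driver->get('http://example.com/js-heavy-page');

# get_page_source returns the DOM as it stands after JavaScript has run
my $html = $driver->get_page_source;
print length($html), " bytes of rendered HTML\n";

$driver->quit;
```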

Trott
6

WWW::Scripter with the WWW::Scripter::Plugin::JavaScript and WWW::Scripter::Plugin::Ajax plugins seems like the closest you'll get without using an actual browser (modules such as WWW::Selenium, Mozilla::Mechanize or Win32::IE::Mechanize drive real browsers).
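A minimal sketch of that approach (assuming both plugins are installed from CPAN; the URL is a placeholder):

```perl
use strict;
use warnings;
use WWW::Scripter;

my $w = WWW::Scripter->new;

# Loading the Ajax plugin pulls in the JavaScript plugin as well
$w->use_plugin('Ajax');

$w->get('http://example.com/page-built-by-javascript');

# ->content now reflects the DOM after the page's scripts have run
print $w->content;
```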

MkV
3

Check the complete working example featured in Scraping pages full of JavaScript. It uses Web::Scraper for HTML processing and Gtk3::WebKit to process dynamic content. However, the latter is quite a PITA to install. If you don't have that many pages to scrape (< 1000), fetching the post-processed DOM content through PhantomJS is an interesting option. I've written the following script for that purpose:

// Save the fully rendered DOM of a page to a file.
// Usage: phantomjs --load-images=no html.js URL filename
var page = require('webpage').create(),
    system = require('system'),
    fs = require('fs'),
    address, output;

if (system.args.length < 3 || system.args.length > 5) {
    console.log('Usage: phantomjs --load-images=no html.js URL filename');
    phantom.exit(1);
} else {
    address = system.args[1];
    output = system.args[2];
    page.open(address, function (status) {
        if (status !== 'success') {
            console.log('Unable to load the address!');
        } else {
            // page.content holds the HTML after JavaScript has run
            fs.write(output, page.content, 'w');
        }
        phantom.exit();
    });
}
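Calling that script from Perl is then just a matter of shelling out. A small wrapper might look like this (a sketch, assuming phantomjs is on your PATH and the script above is saved as html.js):

```perl
use strict;
use warnings;
use File::Temp ();

# Fetch the post-JavaScript DOM of $url via the html.js script above
sub fetch_rendered_html {
    my ($url) = @_;
    my $tmp = File::Temp->new(SUFFIX => '.html');

    system('phantomjs', '--load-images=no', 'html.js', $url, $tmp->filename) == 0
        or die "phantomjs failed: $?";

    open my $fh, '<', $tmp->filename or die "cannot read temp file: $!";
    local $/;                 # slurp mode
    my $html = <$fh>;
    close $fh;
    return $html;
}
```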

There's something like that on CPAN already, a module called Wight, but I haven't tested it yet.

creaktive
-1

WWW::Mechanize::Firefox can be used together with mozrepl to drive a real Firefox instance, so all JavaScript actions work.
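A minimal sketch (assuming Firefox is running with the MozRepl extension enabled; the URL is a placeholder):

```perl
use strict;
use warnings;
use WWW::Mechanize::Firefox;

# Talks to a running Firefox instance via the MozRepl extension
my $mech = WWW::Mechanize::Firefox->new;

$mech->get('http://example.com/js-page');

# ->content returns the HTML after Firefox has run the page's JavaScript
print $mech->content;
```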

abbypan