21

I am looking at writing my own, but I am wondering if there are any good web crawlers out there which are written in Ruby.

Short of a full-blown web crawler, any gems that might be helpful in building a web crawler would be useful. I know this part of the question is touched upon in a couple of places, but a list of gems applicable to building a web crawler would be a great resource as well.

Jordan Dea-Mattson

5 Answers

74

I used to write spiders, page scrapers and site analyzers for my job, and still write them periodically to scratch some itch I get.

Ruby has some excellent gems to make it easy (a couple of short sketches follow the list):

  • Nokogiri is my #1 choice for the HTML parser. I used to use Hpricot, but found some sites that made it explode in flames. I switched to Nokogiri afterwards and have been very happy with it. I regularly use it for parsing HTML, RDF/RSS/Atom and XML. Ox looks interesting too, so that might be another candidate, though I find searching the DOM a lot easier than trying to walk through a big hash, such as what is returned by Ox.

  • OpenURI is good as a simple HTTP client, but it can get in the way when you want to do more complex things or need multiple requests firing at once. For modest to heavyweight jobs I'd recommend looking at HTTPClient, or at Typhoeus with Hydra. Curb is good too, because it uses the cURL library, though its interface isn't as intuitive to me; it's still worth a look.

    Note: OpenURI has some flaws and vulnerabilities that can affect unsuspecting programmers, so it has fallen out of favor somewhat. RestClient is a very worthy successor.

  • You'll need a backing database, and some way to talk to it. This isn't a task for Rails per se, but you could use ActiveRecord, detached from Rails, to talk to the database. I've done that a couple of times and it works all right. These days, though, I really like Sequel for my ORM. It's very flexible in how it lets you talk to the database, from using straight SQL, to using Sequel's ability to programmatically build a query, to modeling the database and using migrations. Once you have the database built, you could still use Rails to act as a front-end to the data.

  • If you are going to navigate sites in any way beyond simply grabbing pages and following links, you'll want to look at Mechanize. It makes it easy to fill out forms and submit pages. As an added bonus, you can grab the content of a page as a Nokogiri HTML document and parse away using Nokogiri's multitude of tricks (see the Mechanize sketch after this list).

  • For massaging/mangling URLs I really like Addressable::URI. It's more full-featured than the built-in URI module. One nice thing URI does have is the URI#extract method for scanning a string for URLs. If that string happened to be the body of a web page, it would be an alternate way of locating links, but the downside is that you'll also get links to images, videos, ads, etc., and you'll have to filter those out, probably resulting in more work than if you had used a parser and looked for <a> tags exclusively. For that matter, Mechanize also has a links method that returns all the links in a page, but you'll still have to filter them to decide whether to follow or ignore them.

  • If you think you'll need to deal with JavaScript-manipulated pages, or pages that get their content dynamically via AJAX, you should look into using one of the WATIR variants. There are flavors for the different browsers on different OSes, such as Firewatir, Safariwatir and Operawatir, so you'll have to figure out what works for you.

  • You do NOT want to rely on keeping your list of URLs to visit, or of visited URLs, in memory. Design a database schema and store that information there. Spend some time up front designing the schema and thinking about what you'll want to know as you collect links on a site. SQLite3, MySQL and Postgres are all excellent choices, depending on how big you think your database needs will be. One of my site analyzers was custom designed to help us recommend SEO changes for a Fortune 50 company. It ran for over three weeks covering about twenty different sites before we had enough data and stopped it. Imagine what would have happened if we'd had a power outage and all that data had gone into the bit bucket.
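
To show how several of these pieces fit together, here's a minimal sketch (nothing I'd ship as-is): OpenURI for the fetch, Nokogiri for parsing, Addressable for normalizing links, and Sequel over SQLite for a persistent frontier. The seed URL, the `crawl.db` filename, the table layout and the one-second delay are illustrative assumptions:

```ruby
require 'open-uri'
require 'nokogiri'
require 'addressable/uri'
require 'sequel'

# A SQLite-backed frontier, so a crash or power outage doesn't lose the crawl state.
DB = Sequel.sqlite('crawl.db')                       # needs the sqlite3 gem
DB.create_table? :urls do
  primary_key :id
  String  :url, unique: true
  Boolean :visited, default: false
end
urls = DB[:urls]

enqueue = lambda do |url|
  urls.insert(url: url) if urls.where(url: url).empty?
end

enqueue.call('http://example.com/')                  # seed URL -- an assumption for illustration

while (row = urls.where(visited: false).first)
  page_url = row[:url]
  begin
    html = URI.open(page_url).read                   # simple fetch (Ruby 2.5+); swap in HTTPClient or Typhoeus for heavier jobs
    doc  = Nokogiri::HTML(html)

    # Only follow real <a> links, normalized through Addressable so duplicates collapse.
    doc.css('a[href]').each do |a|
      begin
        link = Addressable::URI.join(page_url, a['href']).normalize
        next unless %w[http https].include?(link.scheme)
        link.fragment = nil
        enqueue.call(link.to_s)
      rescue Addressable::URI::InvalidURIError
        next                                         # plenty of pages contain junk hrefs
      end
    end
  rescue OpenURI::HTTPError, SocketError
    # record the failure however you like; here we just move on
  end

  urls.where(id: row[:id]).update(visited: true)
  sleep 1                                            # be polite; real etiquette also means honoring robots.txt
end
```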
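
And a quick sketch of the Mechanize side of things; the URL and the 'q' form field are placeholders, not a real site:

```ruby
require 'mechanize'

agent = Mechanize.new
agent.user_agent_alias = 'Mac Safari'     # Mechanize ships a list of user-agent aliases

page = agent.get('http://example.com/')   # placeholder URL

# Every link on the page; you still have to decide which ones to follow.
page.links.each { |link| puts link.href }

# Filling in and submitting a form (the 'q' field name is an assumption).
if (form = page.forms.first) && (field = form.field_with(name: 'q'))
  field.value = 'ruby crawler'
  results = agent.submit(form)
  puts results.title
end

# The underlying Nokogiri document is there when you want it.
puts page.parser.css('a[href]').size
```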

After all that, you'll also want to make your code aware of proper spidering etiquette: What are the key considerations when creating a web crawler?

the Tin Man
  • Wonderful answer! I think Hpricot is deprecated by now, so I would always use Nokogiri instead. – wpp Feb 22 '13 at 09:50
  • I think Hpricot is maintained, or it was the last time I looked, but I still prefer and recommend Nokogiri. [Ox](http://ohler55.github.com/ox/) is interesting too, so it might be worth looking at. – the Tin Man Feb 22 '13 at 17:27
19

I am building wombat, a Ruby DSL to crawl web pages and extract content. Check it out on GitHub: https://github.com/felipecsl/wombat

It is still at an early stage, but the basic functionality is already there. More will be added really soon.
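
To give a feel for the DSL, the snippet below is roughly the shape of the example in the project README; the selectors and property names here are illustrative guesses, so check the repository for the real, current syntax:

```ruby
require 'wombat'

# Declaratively describe what to pull out of a page; Wombat.crawl returns the
# extracted properties (a Hash in the README's example).
results = Wombat.crawl do
  base_url "https://www.github.com"   # illustrative target
  path "/"

  headline xpath: "//h1"              # property names are arbitrary; selectors are assumptions
  tagline  css:   "p.lead"
end

puts results.inspect
```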

Felipe Lima
  • This is extremely cool. Thanks for posting it. Will be checking it out. – Jordan Dea-Mattson Feb 06 '12 at 20:33
  • Felipe, wombat looks really interesting! That said, it isn't fair to say a tool does [web crawling](http://en.wikipedia.org/wiki/Web_crawling) if it doesn't crawl links across the web. A better term for extracting information from the web is [web scraping](http://en.wikipedia.org/wiki/Web_scraping) -- which seems to be what wombat does. – David J. Jun 21 '12 at 16:42
  • @DavidJames thanks for the clarification. Indeed, I've always been in doubt when choosing the right term for it (scraping or crawling). What you said makes sense; however, I have plans to make it more 'crawler-like' in the future, allowing it to follow links, etc. In any case, thanks for your feedback! :) – Felipe Lima Jun 23 '12 at 05:54
5

So you want a good Ruby-based web crawler?

Try spider or anemone. Both have solid usage according to RubyGems download counts.
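
For instance, a minimal anemone run looks something like this (the URL and depth limit are placeholders):

```ruby
require 'anemone'

# Walk a site, staying on its own domain, and handle every page it finds.
Anemone.crawl("http://www.example.com/", depth_limit: 3) do |anemone|
  anemone.on_every_page do |page|
    puts page.url
    # page.doc is a Nokogiri document (nil for non-HTML responses),
    # so the usual scraping tricks work here too.
    title = page.doc && page.doc.at('title')
    puts title.text if title
  end
end
```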

The other answers, so far, are detailed and helpful, but they don't have a laser-like focus on the question, which asks for Ruby libraries for web crawlers. It would seem that this distinction can get muddled: see my answer to "Crawling vs. Web-Scraping?"

David J.
1

The Tin Man's comprehensive list is good, but parts of it are outdated for me.

Most websites my customers deal with are heavily AJAX/JavaScript dependent. I've been using Watir / watir-webdriver / Selenium for a few years too, but the overhead of loading up a hidden web browser on the backend just to render that DOM stuff isn't viable. On top of that, after all this time they still haven't implemented a usable "browser session reuse" to let a new code execution reuse a browser already in memory, shooting down tickets that might eventually have worked their way up the API layers (referring to https://code.google.com/p/selenium/issues/detail?id=18). **

https://rubygems.org/gems/phantomjs is what we're migrating new projects over to now, so the necessary data gets rendered without any sort of invisible, Xvfb-backed, memory- and CPU-heavy web browser.
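
As a rough sketch of the approach -- render the page in headless WebKit, then hand the resulting HTML to Nokogiri -- assuming a phantomjs binary is available on your PATH; the wait time and URL are placeholders:

```ruby
require 'tempfile'
require 'shellwords'
require 'nokogiri'

# Tiny PhantomJS script: load the URL, wait briefly for AJAX, print the rendered DOM.
RENDER_JS = <<-JS
  var page   = require('webpage').create();
  var system = require('system');
  page.open(system.args[1], function () {
    window.setTimeout(function () {     // crude wait for async content; tune to taste
      console.log(page.content);
      phantom.exit();
    }, 2000);
  });
JS

def rendered_html(url)
  script = Tempfile.new(['render', '.js'])
  script.write(RENDER_JS)
  script.close
  html = `phantomjs #{script.path} #{Shellwords.escape(url)}`   # assumes phantomjs is on PATH
  script.unlink
  html
end

doc = Nokogiri::HTML(rendered_html('http://example.com/'))      # placeholder URL
puts doc.css('a[href]').length
```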

** Alternative approaches also failed to pan out:

Marcos
0

If you don't want to write your own, then use any ordinary web crawler. There are dozens out there.

If you do want to write your own, then write your own. A web crawler isn't exactly a complicated activity; it consists of the following (a bare-bones sketch follows the list):

  1. Downloading a website.
  2. Locating URLs in that website, filtered however you dang well please.
  3. For each URL in that website, repeat step 1.
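
In that spirit, a bare-bones version of those three steps might look like the sketch below -- no politeness, no robots.txt, no persistence, and no error handling beyond skipping failures, which is exactly where the real work starts. The seed URL is a placeholder:

```ruby
require 'open-uri'
require 'nokogiri'
require 'set'
require 'uri'

seed    = 'http://example.com/'                  # placeholder
queue   = [seed]
visited = Set.new

until queue.empty?
  url = queue.shift
  next if visited.include?(url)
  visited << url

  html = URI.open(url).read rescue next          # 1. download the page (URI.open needs Ruby 2.5+)
  doc  = Nokogiri::HTML(html)

  doc.css('a[href]').each do |a|                 # 2. locate URLs, filtered however you please
    link = URI.join(url, a['href']).to_s rescue next
    queue << link if link.start_with?(seed) && !visited.include?(link)
  end
end                                              # 3. repeat for each URL found

puts "Visited #{visited.size} pages"
```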

Oh, and this seems to be a duplicate of "Web crawler in ruby".

Arafangion
  • Yes, dealing with those is a matter of optimization. :) A number of those issues would be handled by a good http library, and some of those issues would become irrelevant depending on what you want to use the crawler for. Question: If the url is mangled, obfustigated, or whatever, should you crawl it? – Arafangion Feb 12 '11 at 23:59
  • Ha. Like so many things, it's theoretically easy, but in practice actually quite tricky. We wrote a crawler, and here are a few issues off the top of my head: bad or invalid URLs, bad/invalid base hrefs, javascript- and ajax-loaded content, iframes and nested iframes, gazillions of file types (and how about assets of one file type with the extension of another?), compressed assets, correctly canonicalising URLs, de-duping identical pages with different URLs, crawler traps, inconsistent case sensitivity... the list goes on, with millions of edge cases. Every site you crawl, you discover something new. – Richard H Feb 13 '11 at 00:02
  • "Every site you crawl you discover something new", especially that an incredible number of people pay no attention to the specs. – the Tin Man Feb 13 '11 at 00:36
  • @Arafangion, "If the url is mangled, obfustigated, or whatever, should you crawl it?" Nobody can answer that for you; it depends on your needs. – the Tin Man Feb 13 '11 at 00:37
  • @the Tin Man: That was why I asked. :) – Arafangion Feb 13 '11 at 00:38
  • `A web crawler isn't exactly a complicated activity...`. Yes, it's just three easy things... right. Each one is not easily accomplished unless you know what to do, especially #1 and #2. – the Tin Man Jun 13 '13 at 20:40