What is a good crawler (spider) to use against HTML and XML documents (local or web-based) and that works well in the Lucene / Solr solution space? Could be Java-based but does not have to be.
7 Answers
In my opinion, this is a pretty significant hole that is holding back widespread adoption of Solr. The new DataImportHandler is a good first step for importing structured data, but there is no good document-ingestion pipeline for Solr. Nutch does work, but the integration between the Nutch crawler and Solr is somewhat clumsy.
I've tried every open-source crawler that I can find, and none of them integrates out-of-the-box with Solr.
Keep an eye on OpenPipeline and Apache Tika.
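In the meantime, one workaround for the missing ingestion pipeline is to build Solr's update XML yourself and POST it to the update handler. A minimal sketch using only the JDK, assuming a local Solr at the default port and example field names (`id`, `title`, `text`) that you would adjust to your schema:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SolrPost {
    // Build a Solr <add> message for one document; field names are examples.
    static String toSolrXml(String id, String title, String body) {
        return "<add><doc>"
             + "<field name=\"id\">" + escape(id) + "</field>"
             + "<field name=\"title\">" + escape(title) + "</field>"
             + "<field name=\"text\">" + escape(body) + "</field>"
             + "</doc></add>";
    }

    // Escape the characters that are special in XML text content.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    // POST the XML to a (hypothetical) local Solr update handler.
    static void post(String xml) throws Exception {
        URL url = new URL("http://localhost:8983/solr/update?commit=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Solr responded: " + conn.getResponseCode());
    }

    public static void main(String[] args) {
        // Print the update message instead of posting, so this runs without a server.
        System.out.println(toSolrXml("doc1", "A <title>", "Some body & text"));
    }
}
```

Tika can supply the extracted text and metadata that you feed into a message like this; the point is only that the "last mile" into Solr is a plain HTTP POST.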

I've tried Nutch, but it was very difficult to integrate with Solr. I would take a look at Heritrix. It has an extensive plugin system that makes it easy to integrate with Solr, and it is much, much faster at crawling, making extensive use of threads to speed up the process.

Also check Apache Droids [http://incubator.apache.org/droids/], which aims to be a simple spider/crawler/worker framework.
It is new and not yet easy to use off the shelf (it will take some tweaking to get running), but it is a good thing to keep your eye on.
Nutch might be your closest match, but it's not too flexible.
If you need something more, you will pretty much have to write your own crawler. That's not as bad as it sounds: every language has HTTP and HTML libraries, so you just need to connect a task queue manager to an HTTP downloader and an HTML parser. It's not really that much work, and you can most likely get away with a single box, as crawling is mostly bandwidth-intensive, not CPU-intensive.
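The queue-plus-downloader-plus-parser wiring above can be sketched in a few lines. This is a hedged toy, not a production crawler: link extraction uses a naive regex (a real crawler should use a proper HTML parser), and the download step is stubbed with an in-memory map of pages so the sketch runs offline; you would swap that for an actual HTTP fetch.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MiniCrawler {
    // Naive link extraction; a real crawler would use an HTML parser instead.
    private static final Pattern HREF = Pattern.compile("href=\"([^\"]+)\"");

    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    // Breadth-first crawl: a task queue plus a visited set.
    // 'site' stands in for the network; replace the lookup with an HTTP download.
    static Set<String> crawl(String start, Map<String, String> site) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            String url = queue.poll();
            if (!visited.add(url)) continue;   // skip pages we've already seen
            String html = site.get(url);       // "download" the page
            if (html == null) continue;
            for (String link : extractLinks(html)) {
                if (!visited.contains(link)) queue.add(link);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, String> site = Map.of(
            "/a", "<a href=\"/b\">b</a><a href=\"/c\">c</a>",
            "/b", "<a href=\"/a\">back</a>",
            "/c", "");
        System.out.println(crawl("/a", site));  // prints [/a, /b, /c]
    }
}
```

From here, handing each visited page's text to an indexer (e.g. a Solr update POST) is the only extra step.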
