
I'm looking for a PHP web crawler to gather all the links for a large site and tell me if the links are broken.

So far I've tried modifying an example from here myself (see my earlier question about the code). I've also tried grabbing phpDig, but that site is down. Any suggestions on how I should proceed would be great.

EDIT

The problem isn't grabbing the links; the issue is the scale. I'm not sure whether the script I modified is sufficient to handle what could be thousands of URLs — when I set the search depth to 4, the crawler timed out through the browser. Someone else mentioned something about killing processes so as not to overload the server; could someone please elaborate on that issue?
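
To make the scale concern concrete, here's a rough sketch (not my actual script; the URL and limits are placeholders) of the direction I'm considering: an iterative, queue-based crawl run from the command line, e.g. `php crawl.php`, so the browser timeout doesn't apply:

```php
<?php
// Rough sketch, not my actual script: a breadth-first crawl driven by a
// queue instead of recursion, meant to be run from the CLI so browser
// timeouts don't apply.

set_time_limit(0);                      // remove PHP's execution time limit

$queue    = array(array('http://www.example.com/', 0));  // (url, depth) pairs
$seen     = array('http://www.example.com/' => true);    // don't crawl a URL twice
$maxDepth = 4;

while (!empty($queue)) {
    list($url, $depth) = array_shift($queue);

    $html = @file_get_contents($url);
    if ($html === false) {
        echo "BROKEN: $url\n";
        continue;
    }

    if ($depth >= $maxDepth) {
        continue;                       // reached the depth limit, don't follow further
    }

    // Naive extraction just for the sketch; relative URLs would still need
    // resolving against $url, and a real HTML parser is preferable.
    if (preg_match_all('/href="([^"#]+)"/i', $html, $matches)) {
        foreach ($matches[1] as $link) {
            if (!isset($seen[$link])) {
                $seen[$link] = true;
                $queue[] = array($link, $depth + 1);
            }
        }
    }

    usleep(250000);                     // pause 0.25s between requests to go easy on the server
}
```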

*asked by dbomb101*
  • *(related)* [Best Methods to parse HTML](http://stackoverflow.com/questions/3577641/best-methods-to-parse-html/3577662#3577662) – Gordon Apr 12 '11 at 08:57
  • There are a dozen online tools to do this, do you really need to build your own? – Apr 12 '11 at 08:57
  • http://stackoverflow.com/search?q=crawler+php – Gordon Apr 12 '11 at 08:58

1 Answer


Not a ready-to-use solution, but Simple HTML DOM Parser is one of my favourite DOM parsers. It lets you use CSS selectors to find nodes in the document, so you can easily find `<a href="">` elements. With these hyperlinks you can build your own crawler and check whether the pages are still available.

You can find it [here](http://simplehtmldom.sourceforge.net/).
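
Something along these lines (untested sketch; assumes `simple_html_dom.php` is in the include path and the hrefs are absolute URLs):

```php
<?php
// Untested sketch: pull links out with Simple HTML DOM, then check each
// URL's HTTP status with get_headers().
include 'simple_html_dom.php';

$html = file_get_html('http://www.example.com/');  // placeholder start URL
if (!$html) {
    die("Could not fetch the start page\n");
}

foreach ($html->find('a') as $a) {
    if (!$a->href) {
        continue;                       // anchor without an href attribute
    }

    $headers = @get_headers($a->href);  // e.g. $headers[0] = "HTTP/1.1 404 Not Found"
    $status  = $headers ? (int) substr($headers[0], 9, 3) : 0;

    if ($status === 0 || $status >= 400) {
        echo "BROKEN: {$a->href}\n";
    } else {
        echo "OK:     {$a->href}\n";    // 2xx, and 3xx redirects, count as OK here
    }
}
```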

*answered by Richard Tuin*

  • Suggested third party alternatives to [SimpleHtmlDom](http://simplehtmldom.sourceforge.net/) that actually use [DOM](http://php.net/manual/en/book.dom.php) instead of String Parsing: [phpQuery](http://code.google.com/p/phpquery/), [Zend_Dom](http://framework.zend.com/manual/en/zend.dom.html), [QueryPath](http://querypath.org/) and [FluentDom](http://www.fluentdom.org). – Gordon Apr 12 '11 at 08:58
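
For comparison, a minimal sketch of the native-DOM route those libraries build on, using nothing but PHP's bundled DOM extension (the URL is a placeholder):

```php
<?php
// Minimal sketch of the DOM-based approach: DOMDocument builds a real
// parse tree from the HTML instead of matching strings.
$doc = new DOMDocument();
@$doc->loadHTMLFile('http://www.example.com/');  // @ silences warnings on malformed HTML

foreach ($doc->getElementsByTagName('a') as $a) {
    echo $a->getAttribute('href'), "\n";         // print every link found
}
```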