14

Some servers have a robots.txt file in order to stop web crawlers from crawling through their websites. Is there a way to make a web crawler ignore the robots.txt file? I am using Mechanize for python.

Craig Locke • 755 • 4 • 8 • 12

2 Answers

29

The documentation for mechanize has this sample code:

import mechanize

br = mechanize.Browser()
# ...
# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)

That does exactly what you want.
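
In case it helps, here is a minimal sketch of how that call fits into a small crawl. The target URL and User-Agent string are placeholders, not something from the mechanize docs:

import mechanize

br = mechanize.Browser()

# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)

# Many sites also reject mechanize's default User-Agent, so sending a
# browser-like string is a common companion tweak (placeholder value).
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64)')]

# http://example.com stands in for whatever site you are crawling.
response = br.open('http://example.com')
html = response.read()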

David Heffernan • 601,492 • 42 • 1,072 • 1,490
  • I suggest raising your issue on [flagging this question](http://stackoverflow.com/questions/8373398/creating-replacement-tapplication-for-experimentation) on meta yet again. There seem to be differing opinions on how suspected copyright violations should be handled, and a definitive answer would help. – NullUserException Dec 05 '11 at 18:33
  • @NullUser will do. I'll try and collect together in one place all the conflicting advice I have had, and see if we can't all come to a common viewpoint! – David Heffernan Dec 05 '11 at 18:51
9

This looks like what you need:

from mechanize import Browser
br = Browser()

# Ignore robots.txt
br.set_handle_robots(False)

but be sure you know what you're doing…
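
If you want to see what the flag actually changes: with robots handling left at its default, mechanize raises RobotExclusionError when you open a URL that the site's robots.txt disallows; with it off, the request goes through. A small sketch with a made-up URL:

import mechanize

# Placeholder for a path that the site's robots.txt disallows.
url = 'http://example.com/private/'

br = mechanize.Browser()
try:
    # With robots handling enabled (the default), opening a disallowed
    # URL raises mechanize.RobotExclusionError.
    br.open(url)
except mechanize.RobotExclusionError:
    # Turn robots handling off and retry; the request now goes through.
    br.set_handle_robots(False)
    response = br.open(url)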

eumiro • 207,213 • 34 • 299 • 261