After getting
mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
when using Mechanize, I added the code from Screen scraping: getting around "HTTP Error 403: request disallowed by robots.txt" to ignore robots.txt, but now I am receiving this error:
mechanize._response.httperror_seek_wrapper: HTTP Error 403: Forbidden
Is there a way around this error?
Current code:
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # stop mechanize from honoring robots.txt
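For context on what I suspect is happening: once robots.txt handling is disabled, the remaining 403 often comes from the server rejecting the default library User-Agent, so the commonly suggested next step is to send a browser-like User-Agent (in mechanize, via the Browser's addheaders attribute). A minimal stdlib sketch of the same idea, with example.com as a stand-in URL, just to show the header being attached (no request is actually sent):

```python
import urllib.request

# Stand-in URL for illustration only.
url = "http://example.com/"

# Attach a browser-like User-Agent header; many servers return
# 403 Forbidden to requests that identify as a scripting library.
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)

# urllib normalizes stored header names to "User-agent".
print(req.get_header("User-agent"))
```

The mechanize equivalent would be setting br.addheaders to a list like [("User-agent", "Mozilla/5.0 ...")] before calling br.open(), but I am not sure whether that is enough here.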