I have built a web scraper using Selenium 3.141.0 and Python 3.6. I have slowed the scraper down, and I use a random fake user agent and rotating proxies.
An error pops up at random and terminates my script; I have tried handling it with exceptions, but nothing is working. Any help would be greatly appreciated.
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=47573):
Max retries exceeded with url: /session/ffb99e9d6aad1b754fd1bb1c8ca91f98/cookie
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff0b8f4acf8>:
Failed to establish a new connection: [Errno 111] Connection refused',))
I am trying to prevent this error from shutting the script down, so it can keep running and perform the next search.
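This error means the driver process behind the Selenium session has died, so catching the exception alone is not enough; the session has to be rebuilt. Here is a minimal sketch of the kind of handling I have been attempting. The names `task` and `make_driver` are placeholders for my search function and my driver-construction code, and `max_restarts` is an arbitrary limit:

```python
import urllib3.exceptions


def run_with_restart(task, make_driver, max_restarts=3):
    """Run task(driver); if the driver's local server has died
    (raising MaxRetryError), rebuild the driver and try again."""
    driver = make_driver()
    for _ in range(max_restarts + 1):
        try:
            return task(driver)
        except urllib3.exceptions.MaxRetryError:
            # The chromedriver/geckodriver process is gone, so the old
            # session is unusable; quitting it may itself fail.
            try:
                driver.quit()
            except Exception:
                pass
            driver = make_driver()
    raise RuntimeError("task still failing after %d restarts" % max_restarts)
```

The idea is that each search goes through `run_with_restart`, so one dead driver process costs a restart instead of killing the whole run.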