I'm using a Git repo here that I grabbed from someone (I didn't want to paste the entire code, sorry) that is supposed to grab product URLs and image URLs from each page. It runs for a while and I can see the count increasing as well as the URLs being collected. However, after a while I get this timeout error, and I'm not familiar enough with Selenium or web scraping to understand the error message. Any help would be amazing! I've googled around for a couple of days with no luck. Here is the error I'm getting. The code used is the script 1-url-scraper.py in the git repo. I'm on a Mac using the Firefox webdriver. Thanks so much in advance!
Andre
- This error shows up when the webdriver you are using is older than your browser. Try updating to the driver version that matches your browser version; if you are using Google Chrome, you can see your browser version at chrome://settings/help – Bernardo Olisan Aug 06 '20 at 17:28
- Thanks @bernardo! So update the webdriver? I'm using Firefox. – Andre Aug 06 '20 at 18:29
- Yes; if you are using Firefox, you can see your version here: https://support.mozilla.org/en-US/kb/find-what-version-firefox-you-are-using – Bernardo Olisan Aug 06 '20 at 19:09
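Following up on the comments, here is a minimal sketch of pointing Selenium at a geckodriver that matches the installed Firefox, using the optional webdriver-manager package (pip install webdriver-manager) instead of a possibly stale local binary. It assumes the Selenium 3 API that was current in 2020; the URL and timeout value are placeholders and are not taken from the repo's 1-url-scraper.py:

```python
# Sketch only: webdriver-manager downloads a geckodriver compatible
# with the Firefox installed on this machine and returns its path.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from webdriver_manager.firefox import GeckoDriverManager

driver = webdriver.Firefox(executable_path=GeckoDriverManager().install())
driver.set_page_load_timeout(60)  # placeholder timeout: fail fast instead of hanging

try:
    driver.get("https://example.com/products?page=1")  # placeholder URL
    print(driver.title)
except TimeoutException:
    # The page did not finish loading within the timeout; log and move on
    print("Page load timed out, skipping this page")
finally:
    driver.quit()
```

If the driver version was the problem, swapping in the freshly downloaded geckodriver like this should make the mismatch error go away; the try/except around driver.get() is just one way to keep a long scraping loop running past the occasional slow page.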