
I have code that returns the titles of a list of websites. Sometimes a website takes an absurdly long time to load, and when that happens a timeout error is raised. I'd like the program to keep running when such an error occurs instead of stopping completely.

The code is:

from pyvirtualdisplay import Display
from time import sleep
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
display = Display(visible=0, size=(800, 600))
display.start()
driver = webdriver.Firefox(executable_path='/usr/local/lib/geckodriver/geckodriver')
driver.set_page_load_timeout(60)
driver.get('https://google.com')
print(driver.title)

The following line is what causes a timeout to be raised when the page hasn't loaded after 60 seconds:

driver.set_page_load_timeout(60)

When the 60 seconds pass, the program stops and raises the timeout error. I want it to continue with the next URL instead.


2 Answers


To iterate over a list of URLs even in case of a page load timeout error, you can use the following solution:

  • Code Block:

    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException
    
    urls = ["https://www.booking.com/hotel/in/the-taj-mahal-palace-tower.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaGyIAQGYATG4AQbIAQzYAQHoAQH4AQKSAgF5qAID;sid=338ad58d8e83c71e6aa78c67a2996616;dest_id=-2092174;dest_type=city;dist=0;group_adults=2;hip_dst=1;hpos=1;room1=A%2CA;sb_price_type=total;srfid=ccd41231d2f37b82d695970f081412152a59586aX1;srpvid=c71751e539ea01ce;type=total;ucfs=1&#hotelTmpl", "https://www.google.com/"]
    driver = webdriver.Chrome(executable_path=r'C:\WebDrivers\chromedriver.exe')
    driver.set_page_load_timeout(2)
    for url in urls:
        try:
            driver.get(url)
            print("URL successfully Accessed ... Proceeding with other tasks !!!")
            # perform other operations within the url
        except TimeoutException as e:
            print("Page load Timeout Occured ... moving to next item !!!")
    driver.quit()
    
  • Console Output:

    Page load Timeout Occured ... moving to next item !!!
    Page load Timeout Occured ... moving to next item !!!
    
  • Note:

    • set_page_load_timeout(2) is used to reproduce the page load timeout for demonstration purposes only.
    • The list of URLs is for demonstration purposes only.

You can find a detailed discussion of page load timeouts in How to set the timeout of 'driver.get' for python selenium 3.8.0?
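
If you would rather keep the Firefox / pyvirtualdisplay setup from the question, a minimal sketch of the same pattern could look like this (the geckodriver path is copied from the question and the URL list is purely illustrative):

    from pyvirtualdisplay import Display
    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException

    # Start a virtual display so Firefox can run without a visible screen
    display = Display(visible=0, size=(800, 600))
    display.start()

    driver = webdriver.Firefox(executable_path='/usr/local/lib/geckodriver/geckodriver')
    driver.set_page_load_timeout(60)

    # Illustrative list; replace with your own URLs
    urls = ['https://google.com', 'https://example.com']

    for url in urls:
        try:
            driver.get(url)
            print(driver.title)
        except TimeoutException:
            print("Page load Timeout Occured ... moving to next item !!!")

    driver.quit()
    display.stop()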

    Thank you. This does help with my issue. The list of websites needs to be taken from a .txt file. Do you know how I could accomplish that? – WeekSky Mar 29 '19 at 22:13
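
A minimal sketch of reading the URLs from a .txt file (the filename urls.txt and the one-URL-per-line layout are assumptions, and driver is the one configured in the code block above):

    from selenium.common.exceptions import TimeoutException

    # 'urls.txt' is an assumed file containing one URL per line
    with open('urls.txt') as url_file:
        urls = [line.strip() for line in url_file if line.strip()]

    for url in urls:
        try:
            driver.get(url)
            print(driver.title)
        except TimeoutException:
            print("Page load Timeout Occured ... moving to next item !!!")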

You can use try/except to catch the error so the program keeps running.

from pyvirtualdisplay import Display
from time import sleep
import sys

reload(sys)
sys.setdefaultencoding('utf-8')

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

display = Display(visible=0, size=(800, 600))
display.start()

driver = webdriver.Firefox(executable_path='/usr/local/lib/geckodriver/geckodriver')
driver.set_page_load_timeout(60)

# Wrap the page load itself, since that is the call that raises the timeout
try:
    driver.get('https://google.com')
    print(driver.title)
except Exception as e:
    print(e)
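
Once all the URLs have been processed, it is usually a good idea to clean up (a short sketch, assuming the driver and display objects created above):

# Close the browser and shut down the virtual display
driver.quit()
display.stop()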