I am trying to scrape a web page, but I run into a problem when I try to access an element after the click action.
When the loop starts its second iteration I get "stale element reference: element is not attached to the page document".
I understand that Selenium can't find the elements: the checkbox for another fare and the departure-time div. I have tried all kinds of timeouts, try blocks, and refreshing the page, but the result is always the same: either the element can't be found or I get selenium.common.exceptions.TimeoutException.
Does anybody have an idea what the problem is? I have run out of ideas for how to fix it :(
P.S. I am using Python, Selenium, and webdriver.chrome.
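To make the failure mode concrete, here is a browser-free toy model of what I think is happening (the FakePage/FakeElement classes are made up purely for illustration): each click re-renders the page, and any element reference found before the re-render is no longer attached to the document.

```python
# Toy model (no browser) of the "stale element reference" exception:
# a click re-renders the page, invalidating every previously found element.

class StaleElementReferenceException(Exception):
    pass

class FakeElement:
    def __init__(self, page, generation):
        self.page = page
        self.generation = generation  # which render of the page this belongs to

    def click(self):
        if self.generation != self.page.generation:
            raise StaleElementReferenceException(
                "element is not attached to the page document")
        self.page.generation += 1  # the click triggers a re-render

class FakePage:
    def __init__(self, n):
        self.generation = 0
        self.n = n

    def find_elements(self):
        # like find_elements_by_xpath: references are tied to the current render
        return [FakeElement(self, self.generation) for _ in range(self.n)]

page = FakePage(3)
fares = page.find_elements()  # found once, before the loop
fares[0].click()              # fine: the page has not changed yet
try:
    fares[1].click()          # stale: the first click re-rendered the page
except StaleElementReferenceException as e:
    print("stale:", e)

# Re-finding on every iteration avoids the problem:
for i in range(page.n):
    page.find_elements()[i].click()  # fresh reference each time
```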
UPDATE:
I switched the WebDriver from Chrome to Firefox's geckodriver, deleted wait.until(EC.title_contains(fare.get_attribute('id'))) at the end of the loop, and added driver.implicitly_wait(10) at the end. Now it more or less works. I tried adding implicitly_wait to the Chrome-driver version of the code as well, but it caused the same exceptions. So could the problem be with the Chrome WebDriver?
My code example:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

fares = driver.find_elements_by_xpath("//td[@class='inputselect standardlowfare']/div[1]/input[starts-with(@id,'FlightSelectOutboundStandardLowFare') and @type='radio']")
wait = WebDriverWait(driver, 10)
# elem = wait.until(EC.title_contains(fare.get_attribute('id')))
# wait.until(EC.refreshed(EC.staleness_of(fare.get_attribute('id'))))
# driver.implicitly_wait(10)
# wait = WebDriverWait(driver, 30)
for fare in fares:
    fare.click()
    print("clicked")
    time.sleep(5)
    # driver.get(driver.current_url)
    # wait.until(EC.title_contains(('selectiontable')))
    departureTime = driver.find_element_by_xpath("//*[@id='ctl00_MainContent_ipcAvaDay_upnlResSelection']/div[1]/div/table/tbody/tr[4]/td")
    print("Departure time " + departureTime.text)
    wait.until(EC.title_contains(fare.get_attribute('id')))
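For reference, the pattern that usually avoids this exception is to re-locate the elements on every iteration instead of reusing the list found before the first click. A sketch along those lines (the function name click_each_fare is mine; the XPaths are copied from the code above and have not been verified against the live page):

```python
FARE_XPATH = ("//td[@class='inputselect standardlowfare']/div[1]/input"
              "[starts-with(@id,'FlightSelectOutboundStandardLowFare')"
              " and @type='radio']")
DEPARTURE_XPATH = ("//*[@id='ctl00_MainContent_ipcAvaDay_upnlResSelection']"
                   "/div[1]/div/table/tbody/tr[4]/td")

def click_each_fare(driver, timeout=10):
    # Imports are local so the sketch can be loaded without Selenium installed.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    wait = WebDriverWait(driver, timeout)
    # Count the fares once, then iterate by index.
    count = len(driver.find_elements(By.XPATH, FARE_XPATH))
    for i in range(count):
        # Re-find the list each time: the previous references go stale
        # once the click re-renders the page.
        fares = wait.until(
            EC.presence_of_all_elements_located((By.XPATH, FARE_XPATH)))
        fares[i].click()
        departure = wait.until(
            EC.presence_of_element_located((By.XPATH, DEPARTURE_XPATH)))
        print("Departure time " + departure.text)
```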