
I'm running a Python Selenium script in a Lambda function on AWS.

Using XPath, I try to find the pagination button that moves to the next page.

I use the following code:

button_next = driver.find_element_by_xpath('//a[@data-at="pagination-next"]')

The code works at first: I'm able to print the element and even extract and print its URL:

url = button_next.get_attribute("href")

But later in the log, after the element was printed successfully, I get the following error:

Message: no such element: Unable to locate element: {"method":"xpath","selector":"//a[@data-at="pagination-next"]"}
  (Session info: headless chrome=65.0.3325.181)
  (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.14.255-276-224.499.amzn2.x86_64 x86_64)
: NoSuchElementException
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 69, in lambda_handler
    button_next = driver.find_element_by_xpath('//a[@data-at="pagination-next"]')
  File "/opt/python/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 368, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "/opt/python/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 858, in find_element
    'value': value})['value']
  File "/opt/python/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 311, in execute
    self.error_handler.check_response(response)
  File "/opt/python/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//a[@data-at="pagination-next"]"}
  (Session info: headless chrome=65.0.3325.181)
  (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.14.255-276-224.499.amzn2.x86_64 x86_64)

What is the issue here? Why do I get this error, although the XPath just worked before?


1 Answer


The desired element is a dynamic element, so to click on it you need to induce WebDriverWait for element_to_be_clickable(), and you can use either of the following locator strategies:

  • Using CSS_SELECTOR:

    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[data-at='pagination-next']"))).click()
    
  • Using XPATH:

    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@data-at='pagination-next']"))).click()
    
  • Note: You have to add the following imports:

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    

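Why the wait helps: the pagination link is injected by JavaScript after the initial page load, so an immediate find_element call can run before the link exists in the DOM. WebDriverWait simply polls a condition until it returns a truthy value or a timeout expires. A minimal, library-free sketch of that polling loop (the helper name, timeout, and poll interval are illustrative, not selenium's actual implementation):

```python
import time

def wait_until(condition, timeout=20.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors what selenium's WebDriverWait(driver, timeout).until(...)
    does: retry the lookup instead of failing on the first attempt, because
    the element may be rendered only some time after the page loads.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

With selenium you would pass, for example, lambda: driver.find_elements(By.CSS_SELECTOR, "a[data-at='pagination-next']") as the condition, since find_elements returns an empty (falsy) list rather than raising when nothing matches yet.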

  • Thank you so much for your fast and detailed reply! I'm not sure it helps in my case, because I don't click the URL; instead I use it in a while-loop with "driver.get(url)". Can you advise what to do in my case? I know I need to update Selenium, for example, but it is a real hassle to do so in AWS Lambda. (I'm a newbie to the AWS world :-) ) – Max Jun 17 '22 at 19:35
  • It might be that you have reached the last page, where the button no longer exists or is disabled. You have to check whether it exists using try/except. – Arundeep Chohan Jun 17 '22 at 20:30
  • @Max In that case you may like to raise a new question with all the relevant details and your code trials. – undetected Selenium Jun 17 '22 at 21:16
  • @undetectedSelenium I exported a screenshot to find out what is going on and ran into this issue: https://stackoverflow.com/questions/72669535/python-selenium-scraper-pagination-to-next-page-shows-error-scrap-protection-f @ArundeepChohan no, it is not the last page. Thanks for the hint. – Max Jun 18 '22 at 13:12
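Picking up Arundeep Chohan's comment above: since the asker navigates with driver.get(url) rather than clicking, the while-loop should stop gracefully once the "next" link disappears on the last page. A library-free sketch of that guard (get_next_url is a hypothetical stand-in for the selenium lookup of the href, and LookupError stands in for selenium's NoSuchElementException):

```python
def paginate(get_next_url, first_url, max_pages=100):
    """Follow pagination links until the "next" link disappears.

    `get_next_url` is a hypothetical callable standing in for the selenium
    steps (driver.get, find the pagination link, read its href). It should
    return the next page's URL, raise LookupError, or return None when the
    "next" button no longer exists, i.e. the last page was reached.
    """
    urls = [first_url]
    url = first_url
    for _ in range(max_pages):
        try:
            url = get_next_url(url)
        except LookupError:
            break  # no "next" button on this page: last page reached
        if not url:
            break
        urls.append(url)
    return urls
```

The max_pages cap is a safety net so a scraper bug can never loop forever inside a Lambda invocation, which is billed by execution time.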