
I am trying to wait for a page to fully load with Selenium, and tried to use code from another answer here: https://stackoverflow.com/a/30385843/8165689 (the third method in that answer, which uses Selenium's staleness_of expected condition), originally from http://www.obeythetestinggoat.com/how-to-get-selenium-to-wait-for-page-load-after-a-click.html

However, I think I have a problem specifically with the Python yield keyword in this code. Based on the above, I have this method:

from contextlib import contextmanager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import staleness_of

@contextmanager
def wait_for_page_load(driver, timeout=30):
    old_page = driver.find_element_by_tag_name('html')
    yield WebDriverWait(driver, timeout).until(staleness_of(old_page))

This doesn't get called by Python; a breakpoint shows it is skipped. I also have the same problem with the apparent original code:

@contextmanager
def wait_for_page_load(driver, timeout=30):
    old_page = driver.find_element_by_tag_name('html') # up to here with decorator, the function is called OK, with 'yield' it is NOT called
    yield
    WebDriverWait(driver, timeout).until(staleness_of(old_page))

But if I delete everything from the yield statement onwards, the function does at least get called:

@contextmanager
def wait_for_page_load(driver, timeout=30):
    old_page = driver.find_element_by_tag_name('html')

Does anyone know how I should write the yield statement? I'm not experienced with yield, but it looks like Python has to yield something, so perhaps the original code, which seems to have yield on a line of its own, has a problem?
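
For what it's worth, a minimal experiment away from Selenium (just my attempt to understand @contextmanager, so not part of the code above) suggests the decorated function's body only runs once a with block is entered:

from contextlib import contextmanager

@contextmanager
def demo():
    print('setup')      # only runs when the with block is entered
    yield
    print('teardown')   # runs when the with block exits

demo()        # prints nothing; just builds the context manager object
with demo():  # prints 'setup' here, and 'teardown' when the block ends
    pass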

Will Croxford
  • You don't need to do this with Selenium. Selenium already blocks on page load. Having said that... there may be some background processes that run after the page is loaded that continue to load/change the content of the page. That's a whole other question but what you are attempting here will not solve that problem. – JeffC Feb 14 '19 at 21:41
  • Thanks @JeffC, I'm actually testing something with a Facebook search results list. I got Selenium to load more pages than I could by just viewing the browser source, but it doesn't get all the results on the page in the Selenium content. In fact, however short the results list is, I think it is cutting off before the end somehow (with JS or AJAX, presumably?). This didn't get called because of a syntax problem; it uses yield anyway, and I was calling it with a normal function call, not a generator call like next(wait_for_page_load(driver)), but that doesn't work either because the decorator makes it no longer iterable. – Will Croxford Feb 15 '19 at 12:35
  • It may be a lazy loader situation. You may need to scroll down, etc. to get other results to load. – JeffC Feb 15 '19 at 14:03
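
A minimal sketch of the kind of scrolling JeffC describes, in case it is useful (this assumes driver is the already-open WebDriver, and the fixed sleep is just a placeholder for a proper wait):

import time

# keep scrolling to the bottom until the page height stops growing,
# so lazily loaded results get a chance to appear
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # crude pause; waiting for new result elements would be better
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height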

1 Answer


I think you might have missed the expected conditions import here. Please try this code and see if it helps.

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_page_load(driver, timeout=30):
    old_page = driver.find_element_by_tag_name('html')
    yield WebDriverWait(driver, timeout).until(EC.staleness_of(old_page))
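
For comparison, if you keep the @contextmanager decorator from the article the question links, the wait goes after a bare yield and the function is driven from a with block. A rough sketch of that pattern (assuming driver is an already-open WebDriver; the link text is only a placeholder):

from contextlib import contextmanager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

@contextmanager
def wait_for_page_load(driver, timeout=30):
    old_page = driver.find_element_by_tag_name('html')
    yield  # control returns to the with block here
    # runs when the with block exits: wait for the old page to go stale
    WebDriverWait(driver, timeout).until(EC.staleness_of(old_page))

# usage: the staleness wait happens as the with block exits
with wait_for_page_load(driver, timeout=30):
    driver.find_element_by_link_text('Next').click()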
KunduK
  • Thanks kindly @Kajal, I already had `from selenium.webdriver.support.expected_conditions import staleness_of`, so it isn't that. I've made a mess of how I call this anyway; details in my reply to @JeffC's comment. – Will Croxford Feb 15 '19 at 12:28