I just started using Selenium yesterday to help scrape some data, and I'm having a hard time wrapping my head around its selector engine. I know lxml, BeautifulSoup, jQuery, and Sizzle have similar engines. What I'm trying to do is:
- Wait up to 10 seconds for the page to completely load
- Make sure ten or more span.eN elements are present (two load on the initial page load and more arrive afterwards)
- Then start processing the data with BeautifulSoup
I am struggling with Selenium's expected conditions: either waiting for the nth element or locating specific text that only exists in the nth element. I keep getting errors (TimeoutException, NoSuchElementException, etc.).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = "http://someajaxiandomain.com/that-injects-html-after-pageload.aspx"
wd = webdriver.Chrome()
wd.implicitly_wait(10)
wd.get(url)

# What I've tried (both fail):
# wd.find_element_by_xpath("//span[@class='eN'][10]")
# WebDriverWait(wd, 10).until(
#     EC.text_to_be_present_in_element(By.CSS_SELECTOR, "css=span[class='eN']:contains('foo')"))
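For reference, here is the direction I've been experimenting with: a minimal sketch of a custom wait condition that becomes truthy once at least `n` matching elements exist (the `span.eN` selector and the count of 10 come from my page; the `at_least_n_elements` name is just something I made up). In Selenium, any callable that takes the driver and returns a truthy value can be passed to `WebDriverWait.until`:

```python
# A custom "expected condition": a callable that receives the driver and
# returns the matched elements once there are at least n of them, else False.
def at_least_n_elements(locator, n):
    """locator is a (by, value) tuple, e.g. (By.CSS_SELECTOR, "span.eN")."""
    def _predicate(driver):
        elements = driver.find_elements(*locator)
        return elements if len(elements) >= n else False
    return _predicate

# Intended usage (needs a live browser; url/wd as in the snippet above):
# from selenium import webdriver
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from bs4 import BeautifulSoup
#
# wd = webdriver.Chrome()
# wd.get(url)
# spans = WebDriverWait(wd, 10).until(
#     at_least_n_elements((By.CSS_SELECTOR, "span.eN"), 10)
# )
# soup = BeautifulSoup(wd.page_source, "html.parser")  # hand off to BeautifulSoup
```

Is a hand-rolled condition like this the idiomatic way to do it, or is there a built-in expected condition I'm missing?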