I have a script that looks for certain sources on a page. Sometimes, however, there are no sources at all, and then the loop stops working. I don't know how to solve this, because I have already specified where the script should look for the elements.
servers = browser.find_element_by_xpath('//*[@id="links-container"]/div[2]').find_elements_by_tag_name('tr')
When there are no sources, the page does not contain the HTML element that is present when the sources exist. This could be used for an exception check:
no_elements = browser.find_element_by_xpath('/html/body/div[2]/div[2]/div[1]/div[3]/div[1]')
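One way to avoid the crash, assuming the same Selenium 3 API as the snippets above, is to use the plural `find_elements_by_xpath`, which returns an empty list instead of raising `NoSuchElementException` when nothing matches. A minimal sketch (the helper name `get_server_rows` is hypothetical; `browser` is assumed to be an existing WebDriver):

```python
def get_server_rows(browser):
    # find_elements_* (plural) returns [] when nothing matches,
    # rather than raising NoSuchElementException.
    containers = browser.find_elements_by_xpath('//*[@id="links-container"]/div[2]')
    if not containers:
        # No sources on this page: return an empty list so the
        # caller's for-loop simply runs zero times.
        return []
    return containers[0].find_elements_by_tag_name('tr')
```

With `servers = get_server_rows(browser)`, the `for server in servers:` loop below is skipped entirely on pages without sources, so no separate `no_elements` lookup is needed.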
But I don't really know where to add it. Below is a snippet of my code.
srcs = []
servers = browser.find_element_by_xpath('//*[@id="links-container"]/div[2]').find_elements_by_tag_name('tr')
for server in servers:
    try:
        txt = server.get_attribute('textContent').lower()
        link_type = txt.split()[2]
        if txt.find('test') >= 0:
            button = server.find_element_by_tag_name('a')
            browser.execute_script('arguments[0].click();', button)
            tm = 0
            while True:
                try:
                    browser.switch_to.window(browser.window_handles[1])
                    srcs.append(browser.find_element_by_xpath('/html/body/section/div/iframe').get_attribute('src') + "@" + link_type)
                    browser.close()
                    browser.switch_to.window(browser.window_handles[0])
                    break
                except:
                    if tm == 5:
                        break
                    tm += 1
    except Exception as e:
        print(str(e))
        continue
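The manual `tm` counter in the inner `while` could also be factored into a small retry helper. A minimal sketch, assuming a fixed number of attempts with a short pause between them (the name `retry` and the default values are illustrative, not from the original code):

```python
import time

def retry(action, attempts=5, delay=1.0):
    """Call `action` until it succeeds or the attempts run out.

    Returns the action's result, or None if every attempt raised.
    """
    for i in range(attempts):
        try:
            return action()
        except Exception:
            if i == attempts - 1:
                return None
            time.sleep(delay)
```

In the loop above, the body of the inner `while` would then become something like `src = retry(grab_src)`, where `grab_src` is a small function that switches windows and reads the iframe's `src`; a `None` result means the element never appeared.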
So basically the script looks for specific links, clicks each one, opens it in a new tab, gets the `src`, and moves on to the next one.
I will be grateful for your help.