I am new to programming but am getting familiar with web scraping. I want to write code that clicks on each link on the page. In my attempted code, I have limited it to a sample of just two links to speed things up. However, my current code only ends up clicking the first link, not the second.

from selenium import webdriver
import csv

driver = webdriver.Firefox()
driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")
matches = driver.find_elements_by_xpath('//td[@class="h-text-left"]')
m_samp = matches[0:2]  # sample of just two links to speed things up
for i in m_samp:
    i.click()
    # return to the results page before the next click
    driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")

Ideally, I would like it to click the first link, then go back to the previous page, then click the second link, then go back to the previous page.

Any help is appreciated.

Michael
  • Maybe related https://stackoverflow.com/questions/54590058/loop-through-links-and-scrape-data-from-resulting-pages-using-selenium-python – Kamal Feb 26 '19 at 09:38

1 Answer

First collect all of the clickable URLs into one list, then iterate over that list,

like list_urls = ["url1", "url2"]

for i in list_urls:
    driver.get(i)

Save all the URLs first; otherwise going back and clicking will not work, because you only have one driver instance (not multiple), and once it navigates away the element references from the results page are no longer valid.
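
For example, a minimal sketch of this approach (it assumes each h-text-left cell wraps the match link in an a element, so the XPath and the sample slice may need adjusting to the real page):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")

# Collect the hrefs up front, before navigating anywhere, so the element
# references from the results page cannot go stale.
links = driver.find_elements_by_xpath('//td[@class="h-text-left"]/a')
list_urls = [link.get_attribute("href") for link in links]

# Visit each saved URL directly; there is no need to click and go back.
for url in list_urls[0:2]:  # sample of two, as in the question
    driver.get(url)
    # scrape the match page here

Because list_urls holds plain strings rather than page elements, leaving the results page no longer invalidates anything, and the slice can simply be dropped once the sample works.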

venkatesh