I wrote code in Python for web scraping with Selenium to fetch an HTML table, but it throws an AttributeError: 'WebDriver' object has no attribute 'find_elements_by_xpath'
FULL ERROR
DeprecationWarning: executable_path has been deprecated, please pass in a Service object
  driver = webdriver.Chrome('C:\webdrivers\chromedriver.exe')
Traceback (most recent call last):
  File "C:\Users\rajat.kapoor\PycharmProjects\RajatProject\FirstPythonFile.py", line 6, in <module>
    scheme = driver.find_elements_by_xpath('//tbody/tr/td[0]')
AttributeError: 'WebDriver' object has no attribute 'find_elements_by_xpath'
Given below is the code:
from selenium import webdriver
import pandas as pd
driver = webdriver.Chrome('C:\webdrivers\chromedriver.exe')
driver.get('https://www.mutualfundssahihai.com/en/schemeperformance')
driver.maximize_window()
scheme = driver.find_elements_by_xpath('//tbody/tr/td[0]')
benchmark = driver.find_elements_by_xpath('//tbody/tr/td[1]')
result=[]
for i in range(len(riskometer)):
    temporary_data = {'Scheme':scheme.text,
                      'Benchmark':benchmark.text}
    result.append(temporary_data)
df_data = pd.DataFrame(result)
df_data.to_excel('scrapingresult.xlsx',index=False)
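For reference, find_elements_by_xpath was removed in Selenium 4, and passing the chromedriver path directly to webdriver.Chrome is deprecated in favour of a Service object. A minimal sketch of the same scraper against the Selenium 4 API might look like the following; it assumes the table's first two columns hold the scheme and benchmark text, treats the undefined riskometer loop variable as if the scheme list was meant, and keeps the chromedriver path from the original:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
import pandas as pd

# Selenium 4: pass the chromedriver path through a Service object instead of executable_path
service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=service)

driver.get('https://www.mutualfundssahihai.com/en/schemeperformance')
driver.maximize_window()

# Selenium 4: the find_elements_by_* helpers are gone; use find_elements(By.XPATH, ...)
# XPath positions are 1-based, so td[1] is the first column and td[2] the second (assumed layout)
scheme = driver.find_elements(By.XPATH, '//tbody/tr/td[1]')
benchmark = driver.find_elements(By.XPATH, '//tbody/tr/td[2]')

# find_elements returns lists of WebElements, so index into them row by row
result = []
for i in range(len(scheme)):
    result.append({'Scheme': scheme[i].text,
                   'Benchmark': benchmark[i].text})

df_data = pd.DataFrame(result)
df_data.to_excel('scrapingresult.xlsx', index=False)

driver.quit()

Note that if the table on that page is rendered dynamically, an explicit wait (for example WebDriverWait) might be needed before the cells are present, and to_excel with an .xlsx file requires openpyxl to be installed.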