I'm trying to scrape a 221x7 table with Selenium. Since my first approach takes approx. 3.6 sec, I was wondering what the fastest way is and, at the same time, what best practice would be.
1st: 3.6sec
table_content = driver_lsx_watchlist.find_elements(By.XPATH, '''//*[@id="page_content"]/div/div/div/div/module/div/table/tbody''')
table_content = table_content[0].text       # one .text call fetches the whole tbody
table_content = table_content.splitlines()  # one entry per table row
for i, line in enumerate(table_content):
    print(f'{i} {line}')
2nd: about 200sec!!!
for row in range(1, 222):       # 221 rows
    row_text = ''
    for column in range(1, 8):  # 7 columns (range(1, 7) would miss the last one)
        xpath = f'''//*[@id="page_content"]/div/div/div/div/module/div/table/tbody/tr[{row}]/td[{column}]/div'''
        row_text = row_text + driver_lsx_watchlist.find_element(By.XPATH, xpath).text
    print(row_text)
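I assume the 2nd attempt is so slow because every find_element call and every .text read is a separate round trip to the WebDriver, roughly 221 x 7 of them. Locating the rows once and reading one .text per row should cut that to about 221 round trips; a sketch of what I mean, using the same locators as above:
rows = driver_lsx_watchlist.find_elements(
    By.XPATH, '''//*[@id="page_content"]/div/div/div/div/module/div/table/tbody/tr''')
for i, row in enumerate(rows):
    print(f'{i} {row.text}')  # row.text joins the cell texts of that row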
3rd: a bit over 4sec
print(driver_lsx_watchlist.find_element(By.XPATH, "/html/body").text)
4th: 0.2sec
ActionChains(driver_lsx_watchlist)\
    .key_down(Keys.CONTROL)\
    .send_keys("a")\
    .key_up(Keys.CONTROL)\
    .key_down(Keys.CONTROL)\
    .send_keys("c")\
    .key_up(Keys.CONTROL)\
    .perform()
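The copied text still has to be read back into Python; I do that with something along the lines of pyperclip (assumption, not part of the snippet above):
import pyperclip                   # assumption: clipboard is read via pyperclip
table_content = pyperclip.paste()  # whatever Ctrl+C put on the clipboard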
The clipboard seems to be the fastest of all, but it renders my PC useless while scraping, since the clipboard itself is occupied by the process. So I wonder what best practice would be here, and whether there is a proper solution that runs in under one second while I keep using the very same PC.
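One direction I am considering, but have not benchmarked yet: pulling every cell text in a single execute_script call, so there is exactly one round trip no matter how big the table is. A sketch, where the CSS selector is my assumption derived from the XPath above:
cells = driver_lsx_watchlist.execute_script('''
    const rows = document.querySelectorAll('#page_content table tbody tr');
    return Array.from(rows, tr =>
        Array.from(tr.querySelectorAll('td'), td => td.textContent.trim()));
''')
for i, row in enumerate(cells):
    print(i, ' '.join(row))
execute_script returns the JavaScript arrays as nested Python lists, so no further driver calls are needed after that single round trip.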