I would like to find and visit all the links on a page using Python Selenium, but I am getting the following error:
Traceback (most recent call last):
  File "C:\Users\Acer\PycharmProjects\selenium-rpa\main.py", line 24, in <module>
    print(elem.get_attribute("href"))
AttributeError: 'str' object has no attribute 'get_attribute'. Did you mean: '__getattribute__'?
My code:
from selenium import webdriver
from datetime import datetime
import requests, urllib3
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service
PATH = Service("C:\chromedriver.exe")
url = "http://localhost/rpa_anomaly/test.php"
browser = webdriver.Chrome(service=PATH)
browser.get(url)
elems = browser.find_element(By.XPATH, "//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
I found the same problem in other questions, but my Selenium version is newer, so I can't use the old-style syntax shown below:
elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
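For completeness, this is a minimal, self-contained sketch of what I understand the Selenium 4 equivalent to be, using the plural find_elements together with By.XPATH (the chromedriver path and URL are just my local test setup):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

# Local test setup; the path and URL below are specific to my machine.
service = Service(r"C:\chromedriver.exe")
browser = webdriver.Chrome(service=service)
browser.get("http://localhost/rpa_anomaly/test.php")

# find_elements (plural) returns a list of WebElement objects,
# so each item should support get_attribute("href").
links = browser.find_elements(By.XPATH, "//a[@href]")
for link in links:
    print(link.get_attribute("href"))

browser.quit()

Is this the correct replacement for find_elements_by_xpath in newer Selenium versions, or is something else causing the 'str' error in my original loop?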