I would like to find and visit all the links on a page using Python Selenium. I am getting the following error.

Traceback (most recent call last):
  File "C:\Users\Acer\PycharmProjects\selenium-rpa\main.py", line 24, in <module>
    print(elem.get_attribute("href"))
AttributeError: 'str' object has no attribute 'get_attribute'. Did you mean: '__getattribute__'?

My code:

from selenium import webdriver
from datetime import datetime
import requests, urllib3
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service

PATH = Service("C:\chromedriver.exe")
url = "http://localhost/rpa_anomaly/test.php"
browser = webdriver.Chrome(service=PATH)
browser.get(url)

elems = browser.find_element(By.XPATH, "//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))

The question "Fetch all href link using selenium in python" covers the same problem, but my Selenium version is newer, so I can't use the deprecated find_elements_by_* methods from its answer:

elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))

1 Answer

You can try the regular tag name locator instead:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager

driver = webdriver.Edge(service=Service(EdgeChromiumDriverManager().install()))
driver.get("http://localhost/rpa_anomaly/test.php")
# identify all elements with tag name <a>
links = driver.find_elements(By.TAG_NAME, "a")
# traverse the list and print each href
for link in links:
    print(link.get_attribute("href"))
driver.quit()
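
As a side note, the AttributeError in the posted code points at the locator call itself: find_element (singular) returns only the first matching WebElement, while find_elements (plural) returns a list you can loop over, so each item supports get_attribute(). If you prefer to keep the original XPath locator, here is a minimal sketch using Selenium 4's find_elements, reusing the chromedriver path and URL from the question:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

# chromedriver path and test URL taken from the question
service = Service(r"C:\chromedriver.exe")
browser = webdriver.Chrome(service=service)
browser.get("http://localhost/rpa_anomaly/test.php")

# find_elements (plural) returns a list of WebElements,
# so each item in the loop supports get_attribute()
elems = browser.find_elements(By.XPATH, "//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
browser.quit()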