49

I am practicing Selenium in Python and I want to fetch all the links on a web page using Selenium.

For example, I want all the links in the href attribute of all the <a> tags on http://psychoticelites.com/

I've written a script and it is working, but it gives me the object address. I've tried using the id tag to get the value, but it doesn't work.

My current script:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys


driver = webdriver.Firefox()
driver.get("http://psychoticelites.com/")

assert "Psychotic" in driver.title

continue_link = driver.find_element_by_tag_name('a')
elem = driver.find_elements_by_xpath("//*[@href]")
#x = str(continue_link)
#print(continue_link)
print(elem)
Boris Verkhovskiy
Xonshiz

11 Answers

105

Well, you have to simply loop through the list:

elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))

find_elements_by_* returns a list of elements (note the spelling of 'elements'). Loop through the list, take each element, and fetch the required attribute value from it (in this case, href).
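For a browserless comparison, the same href extraction can be sketched over static HTML with Python's stdlib html.parser (the sample markup below is made up, standing in for a fetched page source):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples for the tag
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value is not None:
                    self.links.append(value)

# Made-up sample HTML; in practice this could be driver.page_source
html = '<p><a href="/one">1</a> <a id="x">no href</a> <a href="/two">2</a></p>'
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # ['/one', '/two']
```

This only sees the static markup, so unlike Selenium it will miss links injected by JavaScript.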

Boris Verkhovskiy
JRodDynamite
  • why is it that all the documentation says xpath is "not recommended" but most of the answers on stackoverflow use xpath? – Ywapom Feb 28 '18 at 01:43
  • XPath is NOT reliable. If the DOM of the website changes, so does the XPath, and your script is bound to crash. After working on multiple scraping scripts, I've come to the conclusion that XPath should only be used as a last resort. – Xonshiz Mar 13 '18 at 15:08
  • Short XPaths like the one in this example are reliable; I use lots of `driver.find_element_by_xpath("//*[@id='']")`. If XPaths become long strings that depend on columns/rows/divs etc. and rely on layout, they should not be used. – MortenB May 21 '19 at 18:47
  • What if I need to return hrefs that belong to a specific class? – GodSaveTheDucks Aug 23 '19 at 13:21
  • You can use this to get elements based on their class name: `driver.find_elements_by_class_name("content")`, where "content" is the name of the class you're looking for. – Xonshiz Mar 25 '21 at 12:55
  • .get_attribute is not available anymore, what's the new one? – Sky Jul 10 '22 at 15:07
  • @Sky - It's still `get_attribute` in the [docs](https://www.selenium.dev/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webelement.html#selenium.webdriver.remote.webelement.WebElement.get_attribute) as well. – JRodDynamite Jul 13 '22 at 07:39
  • `AttributeError: 'WebDriver' object has no attribute 'find_elements_by_xpath'` – Sridhar Sarnobat Jul 25 '22 at 02:36
9

I have checked and tested that there is a function named find_elements_by_tag_name() you can use. This example works fine for me:

elems = driver.find_elements_by_tag_name('a')
for elem in elems:
    href = elem.get_attribute('href')
    if href is not None:
        print(href)
Gabriel Chung
  • This creates a `StaleElementReferenceException` error for me on the line `href = elem.get_attribute('href')`. I tried printing the elem to the console before accessing the attribute, but that just moves the exception to the print line. This is the exact message: `stale element reference: element is not attached to the page document` – ItIsEntropy Mar 09 '21 at 06:49
  • get_attribute is not working, what's the new method in selenium python? – Sky Jul 10 '22 at 15:14
  • @Sky `get_attribute` still works. `find_elements_by_***` does not. See my updated posted answer. – DataMinion Aug 09 '22 at 11:53
5
import time

driver.get(URL)  # URL = the page you want to scrape
time.sleep(7)    # wait for the page to load
elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
driver.close()

Note: Adding a delay is very important. First run it in debug mode and make sure the URL page has loaded. If the page loads slowly, increase the delay (sleep time) and then extract.

If you still face any issues, please refer to the link below (explained with an example) or leave a comment:

Extract links from webpage using selenium webdriver

srinivas s
  • I think the hint to the sleep command is helpful otherwise it is redundant to the accepted answer. – mrk Jun 18 '21 at 09:50
  • the Sleep command is completely relevant. Without it, you can't pick any href attributes because there was no time to load it. Upvoted this solution! – Logic Nov 08 '22 at 03:44
3

You can try something like:

    links = driver.find_elements_by_partial_link_text('')
Shawn
2

You can parse the HTML DOM using the htmldom library in Python. You can find it here and install it using pip:

https://pypi.python.org/pypi/htmldom/2.0

from htmldom import htmldom
dom = htmldom.HtmlDom("https://www.github.com/")  
dom = dom.createDom()

The above code creates an HtmlDom object. The HtmlDom constructor takes one parameter, the URL of the page. Once the dom object is created, you need to call the "createDom" method of HtmlDom. This parses the HTML data and constructs the parse tree, which can then be used for searching and manipulating the HTML data. The only restriction the library imposes is that the data, whether HTML or XML, must have a root element.

You can query the elements using the "find" method of HtmlDom object:

p_links = dom.find("a")
for link in p_links:
    print("URL: " + link.attr("href"))

The above code will print all the links/URLs present on the web page.

Python_Novice
2

Unfortunately, the original link posted by OP is dead...

If you're looking for a way to scrape links on a page, here's how you can scrape all of the "Hot Network Questions" links on this page with gazpacho:

from gazpacho import Soup

url = "https://stackoverflow.com/q/34759787/3731467"

soup = Soup.get(url)
a_tags = soup.find("div", {"id": "hot-network-questions"}).find("a")

[a.attrs["href"] for a in a_tags]
emehex
1

You can do this easily and efficiently with BeautifulSoup. I have tested the code below and it works fine for the same purpose.

After this line:

driver.get("http://psychoticelites.com/")

use the code below:

import requests
from bs4 import BeautifulSoup

response = requests.get(driver.current_url)
soup = BeautifulSoup(response.content, 'html.parser')
for link in soup.find_all('a'):
    if link.get('href'):
        print(link.get('href'))
        print('\n')
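One caveat with scraping raw HTML this way: href values may be relative paths, whereas Selenium's get_attribute('href') returns absolute URLs. The stdlib urllib.parse.urljoin can normalize them (a sketch; the example URLs and hrefs are made up):

```python
from urllib.parse import urljoin

base = "http://psychoticelites.com/"  # the page the links were scraped from
hrefs = ["/about", "posts/1", "http://example.com/x"]  # made-up scraped values

# urljoin leaves absolute URLs alone and resolves relative ones against base
absolute = [urljoin(base, h) for h in hrefs]
print(absolute)
# ['http://psychoticelites.com/about', 'http://psychoticelites.com/posts/1', 'http://example.com/x']
```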
Suman Das
1

The answers above using Selenium's driver.find_elements_by_*** no longer work with Selenium 4. The current method is to use find_elements() with the By class.

Method 1: For loop

The code below uses two lists, one for By.XPATH and the other for By.TAG_NAME. You can use either one; both are not needed.

By.XPATH is, IMO, the easiest, as it does not return the seemingly useless None values that By.TAG_NAME does. The code also removes duplicates.

from selenium.webdriver.common.by import By

driver.get("https://www.amazon.com/")

href_links = []
href_links2 = []

elems = driver.find_elements(by=By.XPATH, value="//a[@href]")
elems2 = driver.find_elements(by=By.TAG_NAME, value="a")

for elem in elems:
    l = elem.get_attribute("href")
    if l not in href_links:
        href_links.append(l)

for elem in elems2:
    l = elem.get_attribute("href")
    if (l is not None) and (l not in href_links2):
        href_links2.append(l)

print(len(href_links))  # 360
print(len(href_links2))  # 360

print(href_links == href_links2)  # True

Method 2: List Comprehension

If duplicates are OK, a one-line list comprehension can be used.

from selenium.webdriver.common.by import By

driver.get("https://www.amazon.com/")

elems = driver.find_elements(by=By.XPATH, value="//a[@href]")
href_links = [e.get_attribute("href") for e in elems]

elems2 = driver.find_elements(by=By.TAG_NAME, value="a")
# href_links2 = [e.get_attribute("href") for e in elems2]  # Does not remove None values
href_links2 = [e.get_attribute("href") for e in elems2 if e.get_attribute("href") is not None]

print(len(href_links))  # 387
print(len(href_links2))  # 387

print(href_links == href_links2)  # True
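The membership-test loop in Method 1 is O(n²) over the link list; an order-preserving dict.fromkeys pass does the same de-duplication in one line (a pure-Python sketch on made-up data):

```python
# Made-up hrefs standing in for elem.get_attribute("href") results
hrefs = ["/a", "/b", None, "/a", "/c", None]

# dict keys are unique and insertion-ordered, so this dedupes while keeping order;
# the generator also drops the None values that By.TAG_NAME can produce
unique = list(dict.fromkeys(h for h in hrefs if h is not None))
print(unique)  # ['/a', '/b', '/c']
```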
DataMinion
0
import requests
from selenium import webdriver
import bs4

driver = webdriver.Chrome(r'C:\chromedrivers\chromedriver')  # enter the path
data = requests.request('get', 'https://google.co.in/')  # any website
s = bs4.BeautifulSoup(data.text, 'html.parser')
for link in s.find_all('a'):
    print(link)
0

Update to the existing answers: for the current version it needs to be:

elems = driver.find_elements_by_xpath("//a[@href]")
for elem in elems:
    print(elem.get_attribute("href"))
0

For 2023:

from selenium.webdriver.common.by import By

url = "https://example.com"
driver.get(url)
raw_links = driver.find_elements(By.XPATH, '//a[@href]')
for link in raw_links:
    l = link.get_attribute("href")
    print("raw_link: {}".format(l))
Chris