I want to scrape all the links that a website has and filter them so that I can wget them later.
The problem: given a URL, say
URL = "https://stackoverflow.com/questions/"
my scraper should crawl it and return URLs such as
https://stackoverflow.com/questions/51284071/how-to-get-all-the-link-in-page-using-selenium-python
https://stackoverflow.com/questions/36927366/how-to-get-the-link-to-all-the-pages-of-a-website-for-data-scrapping
https://stackoverflow.com/questions/46468032/python-selenium-automatically-load-more-pages
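For a single page, pulling the anchors and filtering them already gives the kind of list I want; a rough sketch of what I mean (the function name and the '/questions/' filter are just my own placeholders) is

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def page_links(url, keep="/questions/"):
    # fetch one page and keep only the links whose resolved URL matches the filter
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    links = set()
    for a in soup.find_all('a', href=True):
        full = urljoin(url, a['href'])  # resolve relative hrefs against the page URL
        if keep in full:
            links.add(full)
    return sorted(links)

for link in page_links("https://stackoverflow.com/questions/"):
    print(link)

But that only covers one page, not the whole site.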
Currently, I have borrowed this code from Stack Overflow:
import requests
from bs4 import BeautifulSoup

def recursiveUrl(url, link, depth):
    if depth == 10:
        return url
    else:
        # print(link['href'])
        page = requests.get(url + link['href'])
        soup = BeautifulSoup(page.text, 'html.parser')
        newlink = soup.find('a')
        if len(newlink) == 0:
            return link
        else:
            return link, recursiveUrl(url, newlink, depth + 1)

def getLinks(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    links = soup.find_all('a')
    for link in links:
        try:
            links.append(recursiveUrl(url, link, 0))
        except Exception as e:
            pass
    return links

links = getLinks("https://www.businesswire.com/portal/site/home/news/")
print(links)
I think that instead of going through all the pages of the site, it is only going through the hyperlinks found on the starting page.
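What I think I actually need is a crawl that only follows links on the same domain and remembers the pages it has already visited; a rough sketch of what I have in mind (the names, the visited set, and the max_pages cap are just my guesses) is

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse
from collections import deque

def crawl(start_url, max_pages=50):
    # breadth-first crawl that stays on one domain and stops after max_pages pages
    domain = urlparse(start_url).netloc
    visited, queue, found = set(), deque([start_url]), set()
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            page = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(page.text, 'html.parser')
        for a in soup.find_all('a', href=True):
            full = urljoin(url, a['href'])
            found.add(full)
            if urlparse(full).netloc == domain:  # only follow same-domain links
                queue.append(full)
    return found

links = crawl("https://www.businesswire.com/portal/site/home/news/")
print(len(links))

but I am not sure this is the right way to do it.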
I have also referred to this Scrapy-based approach:
link = "https://www.businesswire.com/news"

from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = link
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print(url)
            yield Request(url, callback=self.parse)
But this is too old; BaseSpider and HtmlXPathSelector are deprecated, and it no longer runs with a current version of Scrapy.
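I tried adapting it to current Scrapy along these lines (the spider name and the domain restriction are just my placeholders, and I am not sure this is the right direction):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class LinksSpider(CrawlSpider):
    # follow every same-domain link and emit each URL found on each page
    name = "links"
    allowed_domains = ["businesswire.com"]
    start_urls = ["https://www.businesswire.com/news"]
    rules = (
        Rule(LinkExtractor(allow_domains=["businesswire.com"]),
             callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        for href in response.css("a::attr(href)").getall():
            yield {"url": response.urljoin(href)}

I would run it with something like scrapy runspider spider.py -o links.json, but I am not confident this is how it is meant to be done.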
Scraping is new to me, so I might be stuck on some basic fundamentals.
Let me know how best to approach this problem.