
I want to scrape the href of every project from the website https://www.kickstarter.com/discover/advanced?category_id=16&woe_id=23424829&sort=magic&seed=2449064&page=1 with Python 3.5 and BeautifulSoup.

This is my code:

#Loading Libraries
import urllib
import urllib.request
from bs4 import BeautifulSoup

#define URL for scraping
theurl = "https://www.kickstarter.com/discover/advanced?category_id=16&woe_id=23424829&sort=magic&seed=2449064&page=1"
thepage = urllib.request.urlopen(theurl)

#Cooking the Soup
soup = BeautifulSoup(thepage,"html.parser")


#Scraping "Link" (href)
project_ref = soup.findAll('h6', {'class': 'project-title'})
project_href = [project.findChildren('a')[0].href for project in project_ref if project.findChildren('a')]
print(project_href)

I get [None, None, .... None, None] back. I need a list with all the hrefs from the class project-title.

Any ideas?

Sebastian Fischer

1 Answer


Try something like this:

import urllib.request
from bs4 import BeautifulSoup

theurl = "https://www.kickstarter.com/discover/advanced?category_id=16&woe_id=23424829&sort=magic&seed=2449064&page=1"
thepage = urllib.request.urlopen(theurl)

soup = BeautifulSoup(thepage, "html.parser")

project_href = [i['href'] for i in soup.find_all('a', href=True)]
print(project_href)

This will return all the href instances. (Incidentally, your original code returned None for every entry because a BeautifulSoup tag's attributes are read by subscription, e.g. tag['href']; writing tag.href instead looks for a child <href> element, which doesn't exist.) As I can see in your link, a lot of the href attributes just contain #. You can avoid these with a simple regex for proper links, or just ignore the # symbols.

project_href = [i['href'] for i in soup.find_all('a', href=True) if i['href'] != "#"]

This will still give you some trash links like /discover?ref=nav, so if you want to narrow it down use a proper regex for the links you need.
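
For example, a minimal sketch of such a filter, assuming Kickstarter project links contain /projects/ in their path (that pattern is an assumption, not something stated in the answer itself):

import re

# Keep only hrefs that look like project pages; "/projects/" is an assumed pattern.
project_pattern = re.compile(r"/projects/")
project_href = [i['href'] for i in soup.find_all('a', href=True)
                if project_pattern.search(i['href'])]
print(project_href)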

EDIT:

To solve the problem you mentioned in the comments:

soup = BeautifulSoup(thepage, "html.parser")
for i in soup.find_all('div', attrs={'class' : 'project-card-content'}):
    print(i.a['href'])
Gábor Erdős
  • Oh yes, that works. Thanks... Is it possible to get only the hrefs from the class project-title? – Sebastian Fischer Jul 25 '16 at 14:57
  • Sure, I will edit my post as soon as I get to work. – Gábor Erdős Jul 26 '16 at 07:07
  • Please update the code. Thank you for that... – Sebastian Fischer Jul 26 '16 at 11:05
  • Thank you. Now I get a list with the correct hrefs. That's nice. Do you know what I have to code to get a string? I mean a result like this: ['href1', 'href2', 'href3', ...., 'href10'], because my other data looks like this and I want to export the data to a CSV and split it into separate rows. Thank you so much. – Sebastian Fischer Jul 26 '16 at 12:16
  • The code I presented gets the links line by line. You can use `[i.a['href'] for i in soup.find_all('div', attrs={'class' : 'project-card-content'})]` to get them back as a list. – Gábor Erdős Jul 26 '16 at 12:25
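
For the CSV export asked about in the comments, a minimal sketch along those lines might look like this (it assumes the soup object from the answer above; the filename links.csv is just a placeholder):

import csv

# Collect the project hrefs as a list, as suggested in the comments.
project_href = [i.a['href'] for i in soup.find_all('div', attrs={'class': 'project-card-content'})]

# Write each href into its own row of a CSV file ("links.csv" is a placeholder name).
with open('links.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for href in project_href:
        writer.writerow([href])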