
I'm trying to scrape articles from a certain website using BeautifulSoup. I keep getting 'HTTP Error 403: Forbidden' as the output. I was wondering if someone could explain to me how to overcome this? Below is my code:

import datetime
import urllib2
from bs4 import BeautifulSoup

url = "http://magharebia.com/en_GB/articles/awi/features/2014/04/14/feature-03"

timestamp = datetime.date.today()

# Check if article is from Magharebia.com
# remaining issues: error 403: forbidden. Possible robots.txt?
# Can't scrape anything atm
if "magharebia.com" in url:

    # Create a new file to write content to
    txt = open('%s.txt' % timestamp, "wb")

    # Parse HTML of article, aka making soup
    soup = BeautifulSoup(urllib2.urlopen(url).read())

    # Write the article title to the file
    try:
        title = soup.find("h2")
        txt.write('\n' + "Title: " + str(title) + '\n' + '\n')
    except:
        print "Could not find the title!"

    # Author/Location/Date
    try:
        artinfo = soup.find("h4").text
        txt.write("Author/Location/Date: " + str(artinfo) + '\n' + '\n')
    except:
        print "Could not find the article info!"

    # Retrieve all of the paragraphs
    tags = soup.find("div", {'class': 'body en_GB'}).find_all('p')
    for tag in tags:
        txt.write(tag.text.encode('utf-8') + '\n' + '\n')

    # Close txt file with new content added
    txt.close()



Please enter a valid URL: http://magharebia.com/en_GB/articles/awi/features/2014/04/14/feature-03
Traceback (most recent call last):
  File "idle_test.py", line 18, in <module>
    soup = BeautifulSoup(urllib2.urlopen(url).read())
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
user3285763

2 Answers


I was able to reproduce the 403 Forbidden error using urllib2. I didn't dig into the cause, but the following worked for me:

import requests
from bs4 import BeautifulSoup

url = "http://magharebia.com/en_GB/articles/awi/features/2014/04/14/feature-03"

soup = BeautifulSoup(requests.get(url).text)

print soup # prints the HTML you are expecting
PepperoniPizza
  • Would you be able to explain why this worked over the urllib2 urlopen I was previously using? – user3285763 Apr 15 '14 at 02:05
  • The 403 is coming from the site. Probably it doesn't like the default user-agent of urllib2 but doesn't happen to prohibit the "requests" default agent. – mgkrebbs Apr 15 '14 at 08:30
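To illustrate the comment above, here is a minimal stdlib sketch (not from either answer) showing the default User-Agent that urllib announces, and how a browser-like header can be attached to a single request instead:

```python
import urllib.request

# urllib's opener ships with a default User-Agent of the form
# "Python-urllib/X.Y", which some servers reject outright with 403.
opener = urllib.request.build_opener()
default_ua = dict(opener.addheaders)["User-agent"]
print(default_ua)  # e.g. Python-urllib/3.11

# Supplying a browser-like User-Agent on the request usually avoids the
# block (URL taken from the question; the network call itself is left
# commented out here):
req = urllib.request.Request(
    "http://magharebia.com/en_GB/articles/awi/features/2014/04/14/feature-03",
    headers={"User-Agent": "Mozilla/5.0"},
)
# html = urllib.request.urlopen(req).read()
```

requests, by contrast, sends its own `python-requests/x.y` agent, which this particular server happens not to block.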
import re
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as soup

url = "your_url"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})

webpage = urlopen(req).read()
page_soup = soup(webpage, "lxml")

text = page_soup.get_text()
text = re.sub(r"\n", " ", text)
text = re.sub(r"\t", " ", text)
text = re.sub(r"\s+", " ", text)
val = re.sub(r'[^a-zA-Z0-9@_,.$£+]', ' ', text).strip()
print(val)