I want to open a URL using urllib.request.urlopen('someurl'):

import urllib.request

with urllib.request.urlopen('someurl') as url:
    b = url.read()

I keep getting the following error:

urllib.error.HTTPError: HTTP Error 403: Forbidden

I understand the error to be due to the site not letting Python access it, to stop bots wasting its network resources, which is understandable. I went searching and found that you need to change the user agent for urllib. However, all the guides and solutions I have found for changing the user agent use urllib2, and I am using Python 3, so none of those solutions work.

How can I fix this problem with Python 3?

  • a [403 error](http://pcsupport.about.com/od/findbyerrormessage/a/403error.htm) may not be due to your user-agent. – hd1 Jun 15 '14 at 05:30

4 Answers


From the Python docs:

import urllib.request

url = 'someurl'  # the page you want to fetch
req = urllib.request.Request(
    url,
    data=None,
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
    }
)

f = urllib.request.urlopen(req)
print(f.read().decode('utf-8'))
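If the server still returns a 403 even with the header set, you can catch the error explicitly instead of letting it crash the script. A minimal sketch (the URL is hypothetical, replace it with your own):

import urllib.request
import urllib.error

req = urllib.request.Request(
    'http://example.com',  # hypothetical URL
    headers={'User-Agent': 'Mozilla/5.0'}
)
try:
    with urllib.request.urlopen(req) as f:
        body = f.read().decode('utf-8')
except urllib.error.HTTPError as e:
    # e.code is the HTTP status (e.g. 403) and e.reason the server's message
    print('Request was refused:', e.code, e.reason)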
from urllib.request import urlopen, Request

urlopen(Request(url, headers={'User-Agent': 'Mozilla'}))
  • This is important. I had to import urllib.request, not simply urllib. Everything else in the accepted answer works with this modification. – wrkyle Jan 24 '16 at 03:21
  • Yeah, you do, but the accepted answer doesn't, so I wanted to draw attention to your answer because it addresses a flaw in the accepted one. – wrkyle Jan 26 '16 at 07:08

I just answered a similar question here: https://stackoverflow.com/a/43501438/206820

In case you not only want to open the URL but also want to download the resource (say, a PDF file), you can use the code below:

    from urllib.request import ProxyHandler, build_opener, install_opener, urlretrieve

    # file_url and file_name are assumed to be defined elsewhere
    # proxy = ProxyHandler({'http': 'http://192.168.1.31:8888'})
    proxy = ProxyHandler({})
    opener = build_opener(proxy)
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.1 Safari/603.1.30')]
    install_opener(opener)

    result = urlretrieve(url=file_url, filename=file_name)

The reason I added the proxy is to monitor the traffic in Charles, and here is the traffic I got:

[Screenshot: Charles capture showing the request's User-Agent header]
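Note that urlretrieve is documented as a legacy interface that might become deprecated. If you prefer to avoid it, here is a sketch of the same download using urlopen directly (file_url and file_name are hypothetical placeholders, as above):

    import shutil
    from urllib.request import Request, urlopen

    file_url = 'http://example.com/file.pdf'  # hypothetical source URL
    file_name = 'file.pdf'                    # local destination path

    req = Request(file_url, headers={'User-Agent': 'Mozilla/5.0'})
    with urlopen(req) as response, open(file_name, 'wb') as out:
        shutil.copyfileobj(response, out)  # stream the body to disk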


The host site's rejection comes from the OWASP ModSecurity Core Rules for Apache mod_security. Rule 900002 has a list of "bad" user agents, and one of them is "python-urllib2". That's why requests with the default user agent fail.

Unfortunately, if you use Python's "robotparser" module,

https://docs.python.org/3.5/library/urllib.robotparser.html?highlight=robotparser#module-urllib.robotparser

it uses the default Python user agent, and there's no parameter to change that. If "robotparser"'s attempt to read robots.txt is refused (as opposed to the file simply not being found), it then treats all URLs from that site as disallowed.
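One workaround (just a sketch, assuming the site serves a plain-text robots.txt; the URLs are hypothetical) is to fetch robots.txt yourself with a custom user agent and hand the lines to RobotFileParser.parse() instead of calling read():

import urllib.request
import urllib.robotparser

# Fetch robots.txt with a browser-like User-Agent so it isn't refused.
req = urllib.request.Request(
    'http://example.com/robots.txt',  # hypothetical site
    headers={'User-Agent': 'Mozilla/5.0'}
)
with urllib.request.urlopen(req) as resp:
    lines = resp.read().decode('utf-8').splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(lines)  # parse the fetched lines instead of letting read() use the default agent
print(rp.can_fetch('Mozilla/5.0', 'http://example.com/somepage'))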
