
Hello there, I was wondering if it is possible to connect to an HTTP host (for example, google.com) and download the source of the web page?

Thanks in advance.

DonJuma

5 Answers


Using urllib2 to download a page.

Google may block this request, as it tries to block all robots. Add a User-Agent header so the request looks like it comes from a browser:

import urllib2
user_agent = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.63 Safari/534.3'
headers = { 'User-Agent' : user_agent }
req = urllib2.Request('http://www.google.com', None, headers)
response = urllib2.urlopen(req)
page = response.read()
response.close() # it's always good practice to close the connection

You can also use PycURL:

import sys
import pycurl

class ContentCallback:
    def __init__(self):
        self.contents = ''

    def content_callback(self, buf):
        self.contents = self.contents + buf

t = ContentCallback()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEFUNCTION, t.content_callback)
curlObj.perform()
curlObj.close()
print t.contents
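
Note that under Python 3, pycurl passes the write callback bytes rather than str, so the string concatenation above breaks. A minimal sketch under that assumption, collecting the body in a BytesIO buffer via WRITEDATA instead:

import pycurl
from io import BytesIO

buf = BytesIO()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEDATA, buf)  # pycurl writes the response body here
curlObj.perform()
curlObj.close()
print(buf.getvalue().decode('utf-8', 'replace'))  # decode the bytes for display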
– pyfunc
  • The urllib2 module has been split across several modules in Python 3 named urllib.request and urllib.error. So with the code above you'll get a 'no module urllib2' error. For the updated answer, see https://stackoverflow.com/questions/2792650/import-error-no-module-name-urllib2 – Joris Mar 04 '18 at 17:22
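
For reference, a minimal Python 3 sketch of the same request, with the imports moved to urllib.request as the comment above describes (the URL and User-Agent string are carried over from the answer):

# Python 3: urllib2's Request/urlopen now live in urllib.request
from urllib.request import Request, urlopen

user_agent = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.63 Safari/534.3'
req = Request('http://www.google.com', headers={'User-Agent': user_agent})
with urlopen(req) as response:  # the context manager closes the connection
    page = response.read()      # bytes; call .decode() if you need str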

You can use the urllib2 module:

import urllib2
url = "http://somewhere.com"
page = urllib2.urlopen(url)
data = page.read()
print data

See the documentation for more examples.
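
If the request fails, urlopen raises an exception rather than returning a page; a small sketch of handling that with urllib2's own exception classes (the URL is a placeholder):

import urllib2

url = "http://somewhere.com"
try:
    page = urllib2.urlopen(url)
    data = page.read()
    print data
except urllib2.HTTPError as e:
    print "The server returned an error:", e.code   # e.g. 404
except urllib2.URLError as e:
    print "Failed to reach the server:", e.reason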

– ghostdog74

The documentation of httplib (low-level) and urllib (high-level) should get you started. Choose the one that's more suitable for you.
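
For a sense of the difference, here is a rough sketch of the low-level httplib route, where you manage the connection, request, and response yourself (www.example.com is a placeholder host):

import httplib

conn = httplib.HTTPConnection('www.example.com')  # host only, no http:// scheme
conn.request('GET', '/')                          # method and path
resp = conn.getresponse()
print resp.status, resp.reason                    # e.g. 200 OK
data = resp.read()                                # the response body
conn.close()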

– AndiDog

Using the requests package:

import requests

url = 'https://www.google.com/'

# Fetch the page; .content holds the HTML source as a byte string
html = requests.get(url).content

or with urllib:

from urllib.request import urlopen

url = 'https://www.google.com/'

# Fetch the page; read() returns the HTML source as a byte string
html = urlopen(url).read()
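
A small addition: requests can also check the status and decode the body for you; a sketch using its documented attributes:

import requests

url = 'https://www.google.com/'
r = requests.get(url)
r.raise_for_status()    # raises an exception on 4xx/5xx responses
html_bytes = r.content  # raw bytes
html_text = r.text      # str, decoded using the encoding requests detects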

Here's another approach to this problem, using mechanize. I found that it bypasses a website's robot-checking system. I commented out set_all_readonly because for some reason it wasn't recognized as a method in mechanize.

import mechanize
url = 'http://www.example.com'

br = mechanize.Browser()
#br.set_all_readonly(False)    # allow everything to be written to
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]  # a short value like 'Firefox' also works
response = br.open(url)
print response.read()      # the text of the page
response1 = br.response()  # get the response again
print response1.read()     # can apply lxml.html.fromstring()
– tisaconundrum