I am using the following code to save a webpage using Python:

import urllib
import sys
from bs4 import BeautifulSoup

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
f = urllib.urlretrieve(url, 'test.html')  # Python 2 urllib; fetches only the raw HTML

Problem: This code saves the page as basic HTML only, without the JavaScript, images, etc. I want to save the webpage as complete (like the option we have in a browser).

Update: I am now using the following code to save all the JS/image/CSS files of the webpage so that it can be saved as a complete webpage, but the output HTML is still being saved as basic HTML:

import pycurl
import StringIO

c = pycurl.Curl()
c.setopt(pycurl.URL, "http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html")

b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.setopt(pycurl.FOLLOWLOCATION, 1)
c.setopt(pycurl.MAXREDIRS, 5)
c.perform()
html = b.getvalue()
#print html
fh = open("file.html", "w")
fh.write(html)
fh.close()
  • Then you would have to write code to parse the HTML, grab all of the linked resources, and download them individually, just like a browser does (see the sketch after these comments). – Amber Jan 25 '13 at 06:35
  • Using Beautiful Soup, can I do that? –  Jan 25 '13 at 06:37
  • Try [Scrapy](http://scrapy.org/), an open source portable Python web scraping framework – Abhijit Jan 25 '13 at 06:38
  • How do I use it? I am very new to programming; I have some experience with Beautiful Soup. –  Jan 25 '13 at 06:43
  • Similar: [Is it possible to get complete source code of a website including css by just providing the URL of website? + Python](http://stackoverflow.com/a/13855315/906815) – Annie Lagang Jan 25 '13 at 06:50
  • @AnneLagang I tried using PyCurl without success, please check out the updated code. –  Jan 25 '13 at 07:27
  • Have you tried what @Amber said? In the link I provided, I gave all the steps that can help you get started. – Annie Lagang Jan 25 '13 at 07:46
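
A minimal sketch of the approach @Amber describes, using requests and BeautifulSoup (both are assumptions here; the question itself uses urllib and pycurl). It downloads the resources referenced by img, script and link tags into an assets folder:

import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')

os.makedirs('assets', exist_ok=True)
# grab every linked resource, the way a browser would
for tag, attr in (('img', 'src'), ('script', 'src'), ('link', 'href')):
    for node in soup.find_all(tag):
        if not node.get(attr):
            continue
        res_url = urljoin(url, node[attr])  # resolve relative links
        name = os.path.basename(urlparse(res_url).path) or 'index'
        try:
            data = requests.get(res_url).content
        except requests.RequestException:
            continue  # skip unreachable resources
        with open(os.path.join('assets', name), 'wb') as f:
            f.write(data)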

4 Answers

Try emulating your browser with Selenium. This script will pop up the Save As dialog for the webpage. You will still have to figure out how to emulate pressing Enter for the download to start, as the file dialog is out of Selenium's reach (how you do it is also OS-dependent).

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

br = webdriver.Firefox()
br.get('http://www.google.com/')

# send Ctrl+S to the page to open the browser's Save As dialog
save_me = ActionChains(br).key_down(Keys.CONTROL)\
         .key_down('s').key_up(Keys.CONTROL).key_up('s')
save_me.perform()

Also, I think following @Amber's suggestion of grabbing the linked resources may be a simpler, and thus better, solution. Still, I think using Selenium is a good starting point, as br.page_source will get you the entire DOM along with the dynamic content generated by JavaScript.
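
For example, a minimal sketch of that page_source route (the time.sleep pause is an assumption, to give asynchronous JavaScript time to finish, as a later comment also notes):

import time
from selenium import webdriver

br = webdriver.Firefox()
br.get('http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html')
time.sleep(5)  # crude wait so JS-generated content has rendered

# page_source is the rendered DOM, not the raw HTTP response
with open('rendered.html', 'w', encoding='utf-8') as f:
    f.write(br.page_source)
br.quit()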

– root (edited by Ross Smith II)
  • This code is giving me `WindowsError: [Error 2] The system cannot find the file specified` error –  Jan 25 '13 at 08:04
  • @atams -- On what line are you getting the error? I tried it out and it worked on my machine... – root Jan 25 '13 at 08:08
  • I am getting the error in this line: `br = webdriver.Firefox()`. Is it because I am using a portable version of Firefox? –  Jan 28 '13 at 06:28
  • How are you going to click the 'save' button after Ctrl+S is triggered? – user2262504 May 21 '15 at 09:27
  • How do you press the Save key? – geogeogeo Jul 20 '17 at 07:17
  • @geogeogeo @user2262504 Just press the Enter key to Save: `save_me = ActionChains(driver).key_down(Keys.ENTER).key_up(Keys.ENTER) save_me.perform()` – Verma Aman Jul 10 '19 at 10:25
  • And since most JS downloads are async, depending on your site, you may have to include a generous `time.sleep(x)` value after your `get()` request before the `br.page_source` will include the content you're seeking! – Liviu Chircu Jul 13 '21 at 17:42

You can do that easily with the simple Python library pywebcopy.

For the current version (5.0.1):

from pywebcopy import save_webpage

url = 'http://some-site.com/some-page.html'
download_folder = '/path/to/downloads/'    

kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}

save_webpage(url, download_folder, **kwargs)

You will have the HTML, CSS and JS all in your download_folder, working just like the original site.

– rajatomar788
  • This library is really useful! Is there, however, a way to locate the HTML file of the webpage and launch it in a browser without manually searching for it? I need to download the complete webpage and then launch the page from the HTML file via a `.py` script (a sketch follows these comments). – tanvi Apr 18 '19 at 15:21
  • Try the version 6 Beta from the github. It automatically opens the html in the browser. https://github.com/rajatomar788/pywebcopy/tree/Beta?files=1 – rajatomar788 May 01 '19 at 05:45
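
A minimal sketch of the launch step asked about above, assuming pywebcopy mirrors the site under download_folder/project_name (the exact layout may vary between versions):

import os
import webbrowser

download_folder = '/path/to/downloads/'
project_name = 'recognisable-name'

# collect every .html file inside the mirrored project folder
pages = [os.path.join(dirpath, name)
         for dirpath, _, names in os.walk(os.path.join(download_folder, project_name))
         for name in names if name.endswith('.html')]
if pages:
    webbrowser.open('file://' + os.path.abspath(pages[0]))  # open the first page found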

To get the script above by @rajatomar788 to run, I had to install all of the following packages first:

pip install pywebcopy 
pip install pyquery
pip install w3lib
pip install parse 
pip install lxml

After that it worked with a few errors (the traceback below, for example, comes from the page referencing a file:/// URL, which requests has no connection adapter for), but I did get the folder filled with the files that make up the webpage.

webpage    - INFO     - Starting save_assets Action on url: 'http://www.gatsby.ucl.ac.uk/teaching/courses/ml1-2016.html'
webpage    - Level 100 - Queueing download of <89> asset files.
Exception in thread <Element(LinkTag, file:///++resource++images/favicon2.ico)>:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\threading.py", line 917, in _bootstrap_inner
    self.run()
  File "C:\ProgramData\Anaconda3\lib\threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 312, in run
    super(LinkTag, self).run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 58, in run
    self.download_file()
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\elements.py", line 107, in download_file
    req = SESSION.get(url, stream=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pywebcopy\configs.py", line 244, in get
    return super(AccessAwareSession, self).get(url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 546, in get
    return self.request('GET', url, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 640, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 731, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'file:///++resource++images/favicon2.ico'

webpage    - INFO     - Starting save_html Action on url: 'http://www.gatsby.ucl.ac.uk/teaching/courses/ml1-2016.html'
– Rich Lysakowski PhD

Try saveFullHtmlPage below, or adapt it.

It will save a modified *.html and save the JavaScript, CSS and images based on the script, link and img tags (the tags_inner dict keys) into a folder named after the page with a _files suffix.

import os, sys, re
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def saveFullHtmlPage(url, pagepath='page', session=requests.Session(), html=None):
    """Save web page html and supported contents        
        * pagepath : path-to-page   
        It will create a file  `'path-to-page'.html` and a folder `'path-to-page'_files`
    """
    def savenRename(soup, pagefolder, session, url, tag, inner):
        if not os.path.exists(pagefolder): # create only once
            os.mkdir(pagefolder)
        for res in soup.findAll(tag):   # images, css, etc..
            if res.has_attr(inner): # the inner attribute (file reference) must exist
                try:
                    filename, ext = os.path.splitext(os.path.basename(res[inner])) # get name and extension
                    filename = re.sub(r'\W+', '', filename) + ext # clean special chars from name
                    fileurl = urljoin(url, res.get(inner))
                    filepath = os.path.join(pagefolder, filename)
                    # rename html ref so can move html and folder of files anywhere
                    res[inner] = os.path.join(os.path.basename(pagefolder), filename)
                    if not os.path.isfile(filepath): # was not downloaded
                        with open(filepath, 'wb') as file:
                            filebin = session.get(fileurl)
                            file.write(filebin.content)
                except Exception as exc:
                    print(exc, file=sys.stderr)
    if not html:
        html = session.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    path, _ = os.path.splitext(pagepath)
    pagefolder = path+'_files' # page contents folder
    tags_inner = {'img': 'src', 'link': 'href', 'script': 'src'} # tag&inner tags to grab
    for tag, inner in tags_inner.items(): # saves resource files and rename refs
        savenRename(soup, pagefolder, session, url, tag, inner)
    with open(path+'.html', 'wb') as file: # saves modified html doc
        file.write(soup.prettify('utf-8'))

Example saving google.com as google.html with its contents in a google_files folder (in the current directory):

saveFullHtmlPage('https://www.google.com', 'google')
– imbr