85

What I want to achieve is to get a website screenshot from any website in python.

Env: Linux

Esteban Feldman
    A quick search of the site brings up many, many near-duplicates of this. Here's a good start: http://stackoverflow.com/questions/713938/how-can-i-generate-a-screenshot-of-a-webpage-using-a-server-side-script – Shog9 Jul 28 '09 at 22:55
  • Shog9: Thanks!! your link has some... will check it. – Esteban Feldman Jul 28 '09 at 23:22
  • Shog9: why don't you add it as an answer? so it can give you points. – Esteban Feldman Jul 28 '09 at 23:27
  • @Esteban: it's not my work - someone else took the time to dig into this and find the resources; I'm just posting links. :-) – Shog9 Jul 29 '09 at 03:29
  • I would suggest leaning towards phantomjs now as per the explanation here as it provides a very clean and robust solution: http://stackoverflow.com/questions/9390493/how-to-take-a-snapshot-of-a-section-of-a-web-page-from-the-shell – ylluminate Feb 22 '12 at 19:16
  • @Shog9 The answer referenced in your first comment has been removed because of "moderation." Thanks! – outis nihil Sep 28 '15 at 18:06

14 Answers

53

Here is a simple solution using webkit: http://webscraping.com/blog/Webpage-screenshots-with-webkit/

import sys
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *

class Screenshot(QWebView):
    def __init__(self):
        self.app = QApplication(sys.argv)
        QWebView.__init__(self)
        self._loaded = False
        self.loadFinished.connect(self._loadFinished)

    def capture(self, url, output_file):
        self.load(QUrl(url))
        self.wait_load()
        # set to webpage size
        frame = self.page().mainFrame()
        self.page().setViewportSize(frame.contentsSize())
        # render image
        image = QImage(self.page().viewportSize(), QImage.Format_ARGB32)
        painter = QPainter(image)
        frame.render(painter)
        painter.end()
        print 'saving', output_file
        image.save(output_file)

    def wait_load(self, delay=0):
        # process app events until page loaded
        while not self._loaded:
            self.app.processEvents()
            time.sleep(delay)
        self._loaded = False

    def _loadFinished(self, result):
        self._loaded = True

s = Screenshot()
s.capture('http://webscraping.com', 'website.png')
s.capture('http://webscraping.com/blog', 'blog.png')
hoju
  • Works well, thank you. However, works reliably only if run from the command line. In a django project, one would use subprocess.Popen() – Steve K Nov 21 '12 at 19:52
  • works fine from within a Python web framework. However, it takes some effort to get webkit working headless. – hoju Nov 22 '12 at 22:46
  • Did anyone experience problems using @hoju's method? It does not work on every webpage... – www.pieronigro.de Sep 19 '14 at 14:20
  • what kind of webpage did it fail for? I would expect it to fail if the webpage loaded the content with AJAX or relied on a plugin. – hoju Sep 20 '14 at 21:07
  • Well maybe it works, but installing Webkit is real rocket science especially if you have to do it on multiple systems, therefore I prefer nodejs approach proposed by Aamir Adnan. – Simanas Dec 05 '14 at 08:02
  • apt-get install python-qt4 – hoju Dec 06 '14 at 09:07
  • I am running this code in a loop, but it only works the first time; then the program terminates with "Segmentation fault (core dumped)". Can you please help me run this code in a loop? – Manish Patel Dec 29 '14 at 07:49
  • I just tried to use this method on [Earth :: Global weather map](http://earth.nullschool.net/#current/wind/surface/level/orthographic=-2.11,53.72,672) and it just gives a black image, so it doesn't work well for all web pages. I'm guessing this has something to do with the animation being run on that site? – DrBwts Jan 02 '16 at 12:26
  • I tried @hoju code, but instead of one single url I am passing a list, i mean, inside a for loop... for url in urllist: s.capture(url, filename) but somewhere in the middle all images starts being equal... dispite url is not the same... is there a bug? – Inês Martins Feb 17 '16 at 16:40
  • For a list try this example: https://webscraping.com/blog/Scraping-multiple-JavaScript-webpages-with-webkit/ – hoju Feb 19 '16 at 09:51
  • @hoju, I tried your solution, it works. But the webpage width is narrow down as open in the mobile. I use: self.page().setViewportSize(QSize(width, height)) to resize the page. But may I know is there a way to auto-get the width and height as the original webpage? – zhihong Mar 25 '16 at 12:10
  • Hey @hoju I am using Windows and python version 3.5. I have a python script which I use to scrape data from multiple urls at a single run of the script. How can I take the screenshot of those URLs? Please help – rohit nair Jun 01 '16 at 11:09
  • @hoju I meant the webpages pertaining to those URLs – rohit nair Jun 01 '16 at 11:15
  • @hoju, can you please update the code for PyQt5? – Mubeen Butt Jun 10 '20 at 10:15
  • For anyone trying on mac, pyqt4 is not supported on macOS Sierra and above according to this [thread](https://github.com/Homebrew/homebrew-core/issues/1957#issuecomment-225806023). – subtleseeker Jul 26 '20 at 10:47
42

Here is my solution, put together with help from various sources. It captures a full web-page screenshot, optionally crops it, and can also generate a thumbnail from the cropped image.

Requirements:

  1. Install NodeJS
  2. Using Node's package manager install phantomjs: npm -g install phantomjs
  3. Install selenium (in your virtualenv, if you are using that)
  4. Install imageMagick
  5. Add phantomjs to system path (on windows)

import os
from subprocess import Popen, PIPE
from selenium import webdriver

abspath = lambda *p: os.path.abspath(os.path.join(*p))
ROOT = abspath(os.path.dirname(__file__))


def execute_command(command):
    result = Popen(command, shell=True, stdout=PIPE).stdout.read()
    if len(result) > 0 and not result.isspace():
        raise Exception(result)


def do_screen_capturing(url, screen_path, width, height):
    print "Capturing screen.."
    driver = webdriver.PhantomJS()
    # PhantomJS saves its service log file in the current directory;
    # to store the log file elsewhere, initialize the driver as:
    # driver = webdriver.PhantomJS(service_log_path='/var/log/phantomjs/ghostdriver.log')
    driver.set_script_timeout(30)
    if width and height:
        driver.set_window_size(width, height)
    driver.get(url)
    driver.save_screenshot(screen_path)


def do_crop(params):
    print "Cropping captured image.."
    command = [
        'convert',
        params['screen_path'],
        '-crop', '%sx%s+0+0' % (params['width'], params['height']),
        params['crop_path']
    ]
    execute_command(' '.join(command))


def do_thumbnail(params):
    print "Generating thumbnail from cropped captured image.."
    command = [
        'convert',
        params['crop_path'],
        '-filter', 'Lanczos',
        '-thumbnail', '%sx%s' % (params['width'], params['height']),
        params['thumbnail_path']
    ]
    execute_command(' '.join(command))


def get_screen_shot(**kwargs):
    url = kwargs['url']
    width = int(kwargs.get('width', 1024)) # screen width to capture
    height = int(kwargs.get('height', 768)) # screen height to capture
    filename = kwargs.get('filename', 'screen.png') # file name e.g. screen.png
    path = kwargs.get('path', ROOT) # directory path to store screen

    crop = kwargs.get('crop', False) # crop the captured screen
    crop_width = int(kwargs.get('crop_width', width)) # the width of crop screen
    crop_height = int(kwargs.get('crop_height', height)) # the height of crop screen
    crop_replace = kwargs.get('crop_replace', False) # does crop image replace original screen capture?

    thumbnail = kwargs.get('thumbnail', False) # generate thumbnail from screen, requires crop=True
    thumbnail_width = int(kwargs.get('thumbnail_width', width)) # the width of thumbnail
    thumbnail_height = int(kwargs.get('thumbnail_height', height)) # the height of thumbnail
    thumbnail_replace = kwargs.get('thumbnail_replace', False) # does thumbnail image replace crop image?

    screen_path = abspath(path, filename)
    crop_path = thumbnail_path = screen_path

    if thumbnail and not crop:
        raise Exception, 'Thumbnail generation requires crop image, set crop=True'

    do_screen_capturing(url, screen_path, width, height)

    if crop:
        if not crop_replace:
            crop_path = abspath(path, 'crop_'+filename)
        params = {
            'width': crop_width, 'height': crop_height,
            'crop_path': crop_path, 'screen_path': screen_path}
        do_crop(params)

        if thumbnail:
            if not thumbnail_replace:
                thumbnail_path = abspath(path, 'thumbnail_'+filename)
            params = {
                'width': thumbnail_width, 'height': thumbnail_height,
                'thumbnail_path': thumbnail_path, 'crop_path': crop_path}
            do_thumbnail(params)
    return screen_path, crop_path, thumbnail_path


if __name__ == '__main__':
    '''
        Requirements:
        Install NodeJS
        Using Node's package manager install phantomjs: npm -g install phantomjs
        install selenium (in your virtualenv, if you are using that)
        install imageMagick
        add phantomjs to system path (on windows)
    '''

    url = 'http://stackoverflow.com/questions/1197172/how-can-i-take-a-screenshot-image-of-a-website-using-python'
    screen_path, crop_path, thumbnail_path = get_screen_shot(
        url=url, filename='sof.png',
        crop=True, crop_replace=False,
        thumbnail=True, thumbnail_replace=False,
        thumbnail_width=200, thumbnail_height=150,
    )

These are the generated images: [screenshot, crop, and thumbnail images omitted]

Aamir Rind
  • Works perfectly in my Django view. No need to set default user-agent, only screen resolution. – serfer2 Jul 09 '14 at 16:42
  • What if a webpage requires certificates for access ?? – Amir Qayyum Khan Jun 09 '15 at 11:33
  • Question was for Python, not NodeJS. – Zoran Pavlovic Oct 20 '16 at 14:17
  • answer is for Python, not NodeJS, this is how a plethora of companies are doing Virtual test users with Python running things (he could install PhantomJS without Node, but it's far easier to have npm available, especially if you'll be deploying it to a remote system) – Gonçalo Vieira May 30 '18 at 10:30
  • This was a great answer, but PhantomJS is discontinued and the call can be replaced by driver = webdriver.Chrome(), which requires the installation of chromedriver. Since this will not be headless, it also makes for a slower experience with stuff flashing on screen, but it works. It makes the answer very similar to the good one from Joolah (which is simpler and with fewer dependencies). – GregD Apr 10 '20 at 14:06
32

This can be done using Selenium:

from selenium import webdriver

DRIVER = 'chromedriver'
driver = webdriver.Chrome(DRIVER)
driver.get('https://www.spotify.com')
screenshot = driver.save_screenshot('my_screenshot.png')
driver.quit()

https://sites.google.com/a/chromium.org/chromedriver/getting-started

Thiago
  • This is nice and quick. Is there a way to get the full page? Currently, only the top portion of the page will be saved. E.g., if a page can be scrolled to the bottom, the above will only get the result of scrolling all the way up. – Quetzalcoatl Jan 14 '20 at 00:44
  • @Quetzalcoatl You can scroll the webpage using `driver.execute_script("window.scrollTo(0, Y)")`, where 'Y' is the screen height. You may set `screenshot = driver.save_screenshot('my_screenshot.png')` and the above code in a loop until your full webpage gets covered. I am not that sure about this, but it sounds logically fine to me. – Shashank Gupta Feb 03 '20 at 06:31
  • @Quetzalcoatl You can also do `driver.execute_script('document.body.style.zoom = "50%"')` – GregD Apr 10 '20 at 14:17
  • Do we need to have Chrome installed? – cikatomo May 09 '20 at 15:40
  • @cikatomo Yes, you do need Chrome installed. – Newskooler Jun 10 '20 at 02:13
  • As an aside to this, I made a small wrapper library around Selenium that streamlines the process - https://github.com/wirelessfuture/pywebcapture - it gets the total scroll height of the page – Token Joe Jul 28 '20 at 22:05
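The scroll-and-capture loop suggested in the comments above can be sketched with a small pure helper that computes the scroll offsets to visit; `scroll_offsets` is a name introduced here for illustration, not part of Selenium:

```python
def scroll_offsets(total_height, viewport_height):
    """Y positions for window.scrollTo(0, y) so that successive
    viewport-sized screenshots cover the whole page; the final offset
    is clamped so the last shot ends exactly at the page bottom."""
    if total_height <= viewport_height:
        return [0]
    offsets = list(range(0, total_height - viewport_height, viewport_height))
    offsets.append(total_height - viewport_height)
    return offsets

# e.g. a 2500px-tall page viewed through a 1000px window:
print(scroll_offsets(2500, 1000))  # [0, 1000, 1500]
```

Each offset would then be passed to `driver.execute_script("window.scrollTo(0, %d)" % y)` followed by `driver.save_screenshot(...)`, and the resulting images stitched together (note the last two shots overlap, since the final offset is clamped).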
20

On the Mac, there's webkit2png and on Linux+KDE, you can use khtml2png. I've tried the former and it works quite well, and heard of the latter being put to use.

I recently came across QtWebKit which claims to be cross platform (Qt rolled WebKit into their library, I guess). But I've never tried it, so I can't tell you much more.

The QtWebKit link shows how to access it from Python. You should be able to at least use subprocess to do the same with the others.
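As a sketch of that subprocess route, here is a helper that builds a webkit2png command line. The `-F`/`-o`/`-D` flags match how webkit2png is invoked elsewhere in this thread, but treat them as assumptions and verify against your installed version; `webkit2png_cmd` is a name introduced here:

```python
import subprocess  # used by the commented-out invocation below

def webkit2png_cmd(url, basename, directory="."):
    # -F: full-size image only, -o: output basename, -D: output directory
    return ["webkit2png", "-F", "-o", basename, "-D", directory, url]

# To actually capture (requires webkit2png on PATH):
# subprocess.run(webkit2png_cmd("https://example.com", "example"), check=True)
```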

ars
  • khtml2png is outdated according to the website; [python-webkit2png](https://github.com/adamn/python-webkit2png/) is recommended by them. – sebix Jun 10 '17 at 14:32
13

11 years later...

Taking a website screenshot using Python3.6 and Google PageSpeedApi Insights v5:

import base64
import requests
import traceback
import urllib.parse as ul

# It's possible to make requests without the api key, but the number of requests is very limited  

url = "https://duckgo.com"
urle = ul.quote_plus(url)
image_path = "duckgo.jpg"

key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
strategy = "desktop" # "mobile"
u = f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?key={key}&strategy={strategy}&url={urle}"

try:
    j = requests.get(u).json()
    ss_encoded = j['lighthouseResult']['audits']['final-screenshot']['details']['data'].replace("data:image/jpeg;base64,", "")
    ss_decoded = base64.b64decode(ss_encoded)
    with open(image_path, 'wb+') as f:
        f.write(ss_decoded) 
except:
    print(traceback.format_exc())
    exit(1)

Notes:

  • Live Demo
  • Pros: Free
  • Cons: Low Resolution
  • Get API Key
  • Docs
  • Limits:
    • Queries per day = 25,000
    • Queries per 100 seconds = 400
Pedro Lobito
9

Using Rendertron is an option. Under the hood, this is a headless Chrome exposing the following endpoints:

  • /render/:url: Access this route e.g. with requests.get if you are interested in the DOM.
  • /screenshot/:url: Access this route if you are interested in a screenshot.

You would install Rendertron with npm, run rendertron in one terminal, access http://localhost:3000/screenshot/:url, and save the file. However, a demo is available at render-tron.appspot.com, making it possible to run this Python 3 snippet locally without installing the npm package:

import requests

BASE = 'https://render-tron.appspot.com/screenshot/'
url = 'https://google.com'
path = 'target.jpg'
response = requests.get(BASE + url, stream=True)
# save file, see https://stackoverflow.com/a/13137873/7665691
if response.status_code == 200:
    with open(path, 'wb') as file:
        for chunk in response:
            file.write(chunk)
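One caveat with the plain `BASE + url` concatenation above: if the target URL itself contains a query string, its `?` and `&` can confuse the route, so percent-encoding it first is safer. A small helper sketch (whether a given Rendertron deployment decodes the encoded form is an assumption to verify; `rendertron_screenshot_url` is a name introduced here):

```python
import urllib.parse

def rendertron_screenshot_url(base, target):
    # Percent-encode the whole target URL (safe="" also encodes "/" and ":")
    # so its own separators survive as part of the route parameter.
    return base + urllib.parse.quote(target, safe="")

print(rendertron_screenshot_url(
    'https://render-tron.appspot.com/screenshot/',
    'https://google.com'))
# https://render-tron.appspot.com/screenshot/https%3A%2F%2Fgoogle.com
```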
Michael H.
  • I like this answer a lot due to its potential, but the documentation on rendertron is pretty poor, so it's difficult to figure out how to use it beyond just your example here. what would an actual, working example look like? Say for someone that just installed rendertron and wants to screenshot this page here? – NL23codes Feb 13 '20 at 09:55
  • Like mentioned above, after you've installed rendertron, you would call `rendertron` on a terminal, then it should listen on port 3000. Then, a screenshot of this very page should be available at http://localhost:3000/screenshot/https://stackoverflow.com/questions/1197172. You can check that by browsing there with your favorite browser, and the code snippet in my answer basically just stores that image to the drive. Of course, you'd have to replace `BASE = 'http://localhost:3000/screenshot/'` and `url = 'https://stackoverflow.com/questions/1197172'`. – Michael H. Feb 17 '20 at 21:22
6

I can't comment on ars's answer, but I actually got Roland Tapken's code running using QtWebKit, and it works quite well.

Just wanted to confirm that what Roland posts on his blog works great on Ubuntu. Our production version ended up not using any of what he wrote but we are using the PyQt/QtWebKit bindings with much success.

Note: The URL used to be: http://www.blogs.uni-osnabrueck.de/rotapken/2008/12/03/create-screenshots-of-a-web-page-using-python-and-qtwebkit/ I've updated it with a working copy.

aezell
  • Cool. I think that's the lib I'll try the next time I need something like this. – ars Jul 29 '09 at 04:48
  • We ended up putting a RabbitMQ server on top of it and building some code to control the Xvfb servers and the processes running in them, to pseudo-thread the screenshots being built. It runs decently fast with an acceptable amount of memory usage. – aezell Jul 29 '09 at 04:52
5

This is an old question and most answers are a bit dated. Currently, I would do one of two things.

1. Create a program that takes the screenshots

I would use Pyppeteer to take screenshots of websites. This runs on the Puppeteer package. Puppeteer spins up a headless chrome browser, so the screenshots will look exactly like they would in a normal browser.

This is taken from the pyppeteer documentation:

import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://example.com')
    await page.screenshot({'path': 'example.png'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

2. Use a screenshot API

You could also use a screenshot API such as this one. The nice thing is that you don't have to set everything up yourself but can simply call an API endpoint.

This is taken from the screenshot API's documentation:

import urllib.parse
import urllib.request
import ssl

ssl._create_default_https_context = ssl._create_unverified_context

# The parameters.
token = "YOUR_API_TOKEN"
url = urllib.parse.quote_plus("https://example.com")
width = 1920
height = 1080
output = "image"

# Create the query URL.
query = "https://screenshotapi.net/api/v1/screenshot"
query += "?token=%s&url=%s&width=%d&height=%d&output=%s" % (token, url, width, height, output)

# Call the API.
urllib.request.urlretrieve(query, "./example.png")
Dirk Hoekstra
4

This uses the web service s-shot.ru (so it's not very fast), but it's quite easy to configure what you need through the link, and you can easily capture full-page screenshots.

import requests
import urllib.parse

BASE = 'https://mini.s-shot.ru/1024x0/JPEG/1024/Z100/?' # you can modify size, format, zoom
url = 'https://stackoverflow.com/'#or whatever link you need
url = urllib.parse.quote_plus(url) #service needs link to be joined in encoded format
print(url)

path = 'target1.jpg'
response = requests.get(BASE + url, stream=True)

if response.status_code == 200:
    with open(path, 'wb') as file:
        for chunk in response:
            file.write(chunk)
Vargan
  • Awesome, tried a lot of the single code block answers, this was the first one that worked for me on Ubuntu 20.x. – a.t. Apr 06 '22 at 13:29
2

You can use the Google Page Speed API to achieve this easily. In my current project, I have used a Google Page Speed API query, written in Python, to capture a screenshot of any web URL provided and save it to a location. Have a look.

import urllib2
import json
import base64
import sys
import requests
import os
import errno

#   The website's URL as an Input
site = sys.argv[1]
imagePath = sys.argv[2]

#   The Google API.  Remove "&strategy=mobile" for a desktop screenshot
api = "https://www.googleapis.com/pagespeedonline/v1/runPagespeed?screenshot=true&strategy=mobile&url=" + urllib2.quote(site)

#   Get the results from Google
try:
    site_data = json.load(urllib2.urlopen(api))
except urllib2.URLError:
    print "Unable to retrieve data"
    sys.exit()

try:
    screenshot_encoded = site_data['screenshot']['data']
except KeyError:
    print "Invalid JSON encountered."
    sys.exit()

#   Google has a weird way of encoding the Base64 data
screenshot_encoded = screenshot_encoded.replace("_", "/")
screenshot_encoded = screenshot_encoded.replace("-", "+")

#   Decode the Base64 data
screenshot_decoded = base64.b64decode(screenshot_encoded)

if not os.path.exists(os.path.dirname(imagePath)):
    try:
        os.makedirs(os.path.dirname(imagePath))
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise

#   Save the file
with open(imagePath, 'wb') as file_:
    file_.write(screenshot_decoded)
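The underscore/dash replacement above is just reversing URL-safe Base64: Python's `base64.urlsafe_b64decode` handles that alphabet directly, with no manual `replace()` calls (a minimal Python 3 sketch; note the input may need trailing `=` padding added if Google strips it):

```python
import base64

def decode_screenshot(data):
    # Google's "-"/"_" alphabet is URL-safe Base64; urlsafe_b64decode
    # accepts it as-is instead of swapping characters back to "+"/"/".
    return base64.urlsafe_b64decode(data)

# Round-trip check with arbitrary bytes:
sample = base64.urlsafe_b64encode(b"\x00\xff\x10png").decode()
assert decode_screenshot(sample) == b"\x00\xff\x10png"
```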

Unfortunately, it has the following drawbacks. If these do not matter to you, you can proceed with the Google Page Speed API. It works well.

  • The maximum width is 320px
  • According to Google API Quota, there is a limit of 25,000 requests per day
Du-Lacoste
1

You don't mention what environment you're running in, which makes a big difference because there isn't a pure Python web browser that's capable of rendering HTML.

But if you're using a Mac, I've used webkit2png with great success. If not, as others have pointed out there are plenty of options.

Daniel Naab
1

I created a library called pywebcapture that wraps Selenium and will do just that:

pip install pywebcapture

Once you install with pip, you can do the following to easily get full size screenshots:

# import modules
from pywebcapture import loader, driver

# load csv with urls
csv_file = loader.CSVLoader("csv_file_with_urls.csv", has_header_bool, url_column, optional_filename_column)
uri_dict = csv_file.get_uri_dict()

# create instance of the driver and run
d = driver.Driver("path/to/webdriver/", output_filepath, delay, uri_dict)
d.run()

Enjoy!

https://pypi.org/project/pywebcapture/

Token Joe
-1

Try this (note it captures the desktop's root window with GTK, so the page must be visible on screen):

#!/usr/bin/env python

import gtk.gdk

import time

import random

while 1 :
    # generate a random time between 120 and 300 sec
    random_time = random.randrange(120,300)

    # wait between 120 and 300 seconds (or between 2 and 5 minutes)
    print "Next picture in: %.2f minutes" % (float(random_time) / 60)

    time.sleep(random_time)

    w = gtk.gdk.get_default_root_window()
    sz = w.get_size()

    print "The size of the window is %d x %d" % sz

    pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,False,8,sz[0],sz[1])
    pb = pb.get_from_drawable(w,w.get_colormap(),0,0,0,0,sz[0],sz[1])

    ts = time.time()
    filename = "screenshot"
    filename += str(ts)
    filename += ".png"

    if (pb != None):
        pb.save(filename,"png")
        print "Screenshot saved to "+filename
    else:
        print "Unable to get the screenshot."
Anand Rajagopal
-1

import subprocess

def screenshots(url, name):
    subprocess.run('webkit2png -F -o {} {} -D ./screens'.format(name, url),
                   shell=True)
Lulu
  • Welcome to Stack Overflow! To make your answer stand out, it would be great to add some explanation of your approach (e.g. what are all of those parameters to `webkit2png`?) and links to documentation. – dspencer Mar 19 '20 at 15:15
  • `webkit2png` is not installed by default –  Sep 24 '20 at 15:20