
I'm trying to write a Python script to download images from any website. It works, but inconsistently: find_all("img") doesn't find the image tags for the second URL below. The script is:

# works for http://proof.nationalgeographic.com/2016/02/02/photo-of-the-day-best-of-january-3/
# but not http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/
import requests
from PIL import Image
from io import BytesIO
from bs4 import BeautifulSoup

def url_to_image(url, filename):
    # get HTTP response, open as bytes, save the image
    # http://docs.python-requests.org/en/master/user/quickstart/#binary-response-content
    req = requests.get(url)
    i = Image.open(BytesIO(req.content))
    i.save(filename)

# open page, get HTML request and parse with BeautifulSoup
html = requests.get("http://proof.nationalgeographic.com/2016/02/02/photo-of-the-day-best-of-january-3/")
soup = BeautifulSoup(html.text, "html.parser")

# find all JPEGS in our soup and write their "src" attribute to array
urls = []
for img in soup.find_all("img"):
    if img["src"].endswith("jpg"):
        print("endswith jpg")
        urls.append(str(img["src"]))
    print(str(img))

jpeg_no = 00
for url in urls:
    url_to_image(url, filename="NatGeoPix/" + str(jpeg_no) + ".jpg")
    jpeg_no += 1
Sank Finatra

1 Answer


The images on the page that is failing are rendered with JavaScript, so they are not in the HTML that requests fetches. First render the page with dryscrape.

(If you don't want to use dryscrape, see Web-scraping JavaScript page with Python.)

e.g.

import requests
from PIL import Image
from io import BytesIO
from bs4 import BeautifulSoup
import dryscrape

def url_to_image(url, filename):
    # get HTTP response, open as bytes, save the image
    # http://docs.python-requests.org/en/master/user/quickstart/#binary-response-content
    req = requests.get(url)
    i = Image.open(BytesIO(req.content))
    i.save(filename)

# open page, get HTML request and parse with BeautifulSoup

session = dryscrape.Session()
session.visit("http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/")
response = session.body()
soup = BeautifulSoup(response, "html.parser")

# find all JPEGS in our soup and write their "src" attribute to array
urls = []
for img in soup.find_all("img"):
    if img["src"].endswith("jpg"):
        print("endswith jpg")
        urls.append(str(img["src"]))
        print(str(img))

jpeg_no = 00
for url in urls:
    url_to_image(url, filename="NatGeoPix/" + str(jpeg_no) + ".jpg")
    jpeg_no += 1

But I would also check that you have an absolute URL, not a relative one:

import requests
from PIL import Image
from io import BytesIO
from bs4 import BeautifulSoup
import dryscrape
from urllib.parse import urljoin


def url_to_image(url, filename):
    # get HTTP response, open as bytes, save the image
    # http://docs.python-requests.org/en/master/user/quickstart/#binary-response-content
    req = requests.get(url)
    i = Image.open(BytesIO(req.content))
    i.save(filename)

# open page, get HTML request and parse with BeautifulSoup
base = "http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/"
session = dryscrape.Session()
session.visit(base)
response = session.body()
soup = BeautifulSoup(response, "html.parser")

# find all JPEGS in our soup and write their "src" attribute to array
urls = []
for img in soup.find_all("img"):
    if img["src"].endswith("jpg"):
        print("endswith jpg")
        urls.append(str(img["src"]))
        print(str(img))

jpeg_no = 00
for url in urls:
    if url.startswith('http'):
        absolute = url
    else:
        absolute = urljoin(base, url)
    print(absolute)
    url_to_image(absolute, filename="NatGeoPix/" + str(jpeg_no) + ".jpg")
    jpeg_no += 1
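
As a side note, urljoin already returns an absolute URL unchanged, so the startswith('http') guard is mostly belt and braces. A minimal sketch of its behaviour (the image paths below are made-up examples, not real srcs from the page):

from urllib.parse import urljoin

base = "http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/"

# a relative src is resolved against the page URL
print(urljoin(base, "images/photo.jpg"))
# -> http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/images/photo.jpg

# an absolute src is returned as-is
print(urljoin(base, "http://example.org/pic.jpg"))
# -> http://example.org/pic.jpg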
Dan-Dev
  • Or use Selenium with PhantomJS, or Google Chrome, which supports headless mode (didn't try it; see the sketch after these comments) – innicoder May 16 '17 at 03:37
  • How could you tell the images were rendered with JS? – Sank Finatra May 16 '17 at 17:34
  • If I turn off JavaScript in Firefox using the Web Developer toolbar, the images don't show. Also, looking at the page source (not the generated source) I could not see the image elements in the HTML, but I could see lots of references to them in JavaScript. Using the method above I was able to scrape the images. – Dan-Dev May 16 '17 at 17:42
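
To check this yourself, you can compare the raw HTML that requests fetches with the JavaScript-rendered page. A minimal sketch (counts only; the exact numbers depend on the site):

import requests
from bs4 import BeautifulSoup
import dryscrape

url = "http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/"

# <img> tags in the raw HTML, without running any JavaScript
raw_soup = BeautifulSoup(requests.get(url).text, "html.parser")
print("raw img tags:", len(raw_soup.find_all("img")))

# <img> tags after the page has been rendered with dryscrape
session = dryscrape.Session()
session.visit(url)
rendered_soup = BeautifulSoup(session.body(), "html.parser")
print("rendered img tags:", len(rendered_soup.find_all("img")))

If the first count is (near) zero and the second is not, the images are being injected by JavaScript.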
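
For completeness, here is the Selenium / headless Chrome route mentioned in the comment above, as a rough sketch rather than part of the accepted answer. It assumes selenium and a matching chromedriver are installed:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
driver.get("http://www.nationalgeographic.com/photography/proof/2017/05/lake-chad-desertification/")
html = driver.page_source  # HTML after JavaScript has run
driver.quit()

soup = BeautifulSoup(html, "html.parser")
urls = [img["src"] for img in soup.find_all("img") if img.get("src", "").endswith("jpg")]
print(urls)

From here the same url_to_image and urljoin logic as above applies.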