
I found this post and wanted to modify the script slightly to download the images to a specific folder. My edited file looks like this:

import re
import requests
from bs4 import BeautifulSoup
import os

site = 'http://pixabay.com'
directory = "pixabay/" #Relative to script location

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    #print(url)
    filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)

    with open(os.path.join(directory, filename.group(1)), 'wb') as f:
        if 'http' not in url:
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)

This seems to work fine for pixabay, but it fails for other sites like imgur or heroimages. If I replace the site declaration with

site = 'http://heroimages.com/portfolio'

nothing is downloaded. The print statement (when uncommented) doesn't print anything, so I'm guessing it's not finding any image tags? I'm not sure.

On the other hand, if I replace site with

site = 'http://imgur.com'

I sometimes get a

AttributeError: 'NoneType' object has no attribute 'group'

or, if the images do download, I can't open them because I get a "not supported file format" error.
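For reference, the `AttributeError` means `re.search` returned `None`, likely because some `src` values don't end in `.jpg`, `.gif`, or `.png` (e.g. data URIs or URLs with query strings). A minimal guard for that case, keeping the same pattern as the loop above, might look like:

```python
import re

# example src values; only the first one matches the pattern
urls = [
    '//i.imgur.com/abc123.jpg',                  # matches
    '//i.imgur.com/abc123.jpg?fb',               # query string: no match
    'data:image/gif;base64,R0lGODlhAQABAIAAAA',  # data URI: no match
]

for url in urls:
    match = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
    if match is None:
        continue  # skip srcs the pattern can't extract a filename from
    print(match.group(1))  # abc123.jpg
```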

Also worth noting, right now the script requires the folder specified by directory to exist. I plan on changing it in the future so that the script creates the directory, if it does not exist already.


1 Answer


You need to wait for JavaScript to load the page; I think that is the problem here. If you want, you can use Selenium:

# your imports
...
from selenium import webdriver

site = 'http://heroimages.com/portfolio'
directory = "pixabay/" #Relative to script location

driver = webdriver.Chrome('/usr/local/bin/chromedriver')

driver.get(site)

soup = BeautifulSoup(driver.page_source, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    print(url)
    # your code
    ...

Output

# from `http://heroimages.com/portfolio`
https://ssl.c.photoshelter.com/img-get2/I00004gQScPHUm5I/sec=wdtsdfoeflwefms1440ed201806304risXP3bS2xDXil/fill=350x233/361-03112.jpg
https://ssl.c.photoshelter.com/img-get2/I0000h9YWTlnCxXY/sec=wdtsdfoeflwefms1440ed20180630Nq90zU4qg6ukT5K/fill=350x233/378-01449.jpg
https://ssl.c.photoshelter.com/img-get2/I0000HNg_JtT_QrQ/sec=wdtsdfoeflwefms1440ed201806304CZwwO1L641maB9/fill=350x233/238-1027-hro-3552.jpg
https://ssl.c.photoshelter.com/img-get2/I00000LWwYspqXuk/sec=wdtsdfoeflwefms1440ed201806302BP_NaDsGb7udq0/fill=350x233/258-02351.jpg
# and many others images

Also, here is a snippet that checks whether the directory exists and creates it if it doesn't:

...
directory = os.path.dirname(os.path.realpath(__file__)) + '/pixabay/'    
if not os.path.exists(directory):
    os.makedirs(directory)
...
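On Python 3.2+, the check-then-create pair can also be collapsed into a single call with `exist_ok=True`, which avoids an error if the folder already exists. A small self-contained sketch (using a temporary folder here as a stand-in for the script's location):

```python
import os
import tempfile

base = tempfile.mkdtemp()                  # stand-in for the script's folder
directory = os.path.join(base, 'pixabay')

os.makedirs(directory, exist_ok=True)      # creates the folder
os.makedirs(directory, exist_ok=True)      # safe to call again, no error

print(os.path.isdir(directory))  # True
```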