
I'm working on a project that downloads images from a URL without knowing in advance how many there are. They all have the same base URL, but the number changes, like www.url000.png, www.url0001.png, and so on. I want to create a loop that downloads all the PNG files until there are none left, and then stops. Here is my code for the download so far. Thanks for helping (first question here, by the way, so sorry if it's too long).

import os
import urllib.request

count = 1
fnum = str()
# Zero-pad the counter to six digits (e.g. 1 -> '000001').
if len(str(count)) == 1:
    fnum = '00000' + str(count)
elif len(str(count)) == 2:
    fnum = '0000' + str(count)
elif len(str(count)) == 3:
    fnum = '000' + str(count)
else:
    print('error')

# url, fullpath, and extention are defined earlier in my script.
os.chdir(fullpath)
urllib.request.urlretrieve(url + fnum + extention, str(count) + extention)
print('Downloading ' + str(count))
count = count + 1
    It looks like (a) you need a loop, and (b) you need to handle exceptions raised by `urlretrieve`. You can find many examples of code that uses `urlretrieve`, including [here on stackoverflow](https://stackoverflow.com/a/2202866/147356). – larsks Aug 14 '21 at 19:34
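
Putting the comment's two suggestions together, a minimal sketch of such a loop might look like the following. The values of url, fullpath, and extention are hypothetical placeholders standing in for whatever the question's script actually defines:

import os
import urllib.error
import urllib.request

url = 'http://www.example.com/url'  # hypothetical base URL
fullpath = '/path/to/downloads'     # hypothetical download folder
extention = '.png'

os.chdir(fullpath)  # save files into the download folder
count = 1
while True:
    fnum = str(count).zfill(6)  # zero-pad to six digits: 1 -> '000001'
    try:
        urllib.request.urlretrieve(url + fnum + extention, str(count) + extention)
    except urllib.error.HTTPError:
        # The server answered with an error (typically 404), so assume
        # there are no more numbered files and stop.
        break
    print('Downloading ' + str(count))
    count = count + 1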

1 Answer


You could use the status code on the response object that the requests package returns to check whether a given URL is valid.

The code below is a conditional I used to make sure that a link was valid before I tried to scrape from it:

import requests

result = requests.get(url)  # url is the link you want to check
url_status = result.status_code  # the HTTP status code of the response
print("Status Code: {}".format(url_status))
if 200 <= url_status < 300:
    # successful status codes for HTTP begin with 2--
    print("Link is valid")
else:
    print("Error with the link")
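
Folding that check into the download loop from the question gives a minimal sketch like the one below. The base URL is a hypothetical placeholder, and requests is a third-party package (installed with pip install requests):

import requests

url = 'http://www.example.com/url'  # hypothetical base URL
extention = '.png'

count = 1
while True:
    fnum = str(count).zfill(6)  # zero-pad to six digits, as in the question
    result = requests.get(url + fnum + extention)
    if 200 <= result.status_code < 300:  # successful HTTP codes begin with 2
        with open(str(count) + extention, 'wb') as f:
            f.write(result.content)  # write the image bytes to disk
        print('Downloading ' + str(count))
        count = count + 1
    else:
        print('Error with the link')  # first missing file: stop the loop
        break

Here the first non-2xx response ends the loop, which suits a strictly sequential numbering scheme; the files are saved into the current working directory.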

Also, your question was perfectly fine. Not too long at all.

– Austin