I am downloading a few images from a website whose image URLs follow a sequential format.
Every time I run the loop, it gets stuck at a random point, for example after 50 images, or sometimes after 70. It doesn't throw any kind of error; it just hangs. How can I handle this so that I can download everything in one go, without any stalls?
import urllib.request
from urllib.error import HTTPError
from urllib.error import URLError
import time

for i in range(1, 240):
    filepath = r"C:\Users\....\Image collector\\"
    filename = f'{i:03}' + ".jpg"
    fullpath = filepath + filename
    print(fullpath)
    url = "https://www.somerandomxyzwebsite.com/abc_" + f'{i:03}' + ".jpg"
    try:
        urllib.request.urlretrieve(url, fullpath)
    except HTTPError as e:
        print(e)
    except URLError:
        print("Server down or incorrect domain")

print('done')
Can I add a condition so that, if a download takes longer than one minute, the script hits the same URL again? Is that the right approach, or is there a better way to handle this situation?
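Something like the sketch below is what I had in mind. It is rough and untested: download_with_retry is just a helper name I made up, and I'm assuming socket.setdefaulttimeout also covers urlretrieve (which has no timeout parameter of its own), so a stalled connection would raise socket.timeout instead of hanging forever.

import socket
import urllib.request
from urllib.error import HTTPError

socket.setdefaulttimeout(60)  # any socket operation stuck for > 60 s now raises socket.timeout

def download_with_retry(url, fullpath, retries=3):
    # Hypothetical helper: retry the same URL a few times if the transfer stalls.
    for attempt in range(1, retries + 1):
        try:
            urllib.request.urlretrieve(url, fullpath)
            return True
        except HTTPError as e:
            print(e)  # 4xx/5xx response from the server; retrying won't help
            return False
        except OSError as e:
            # URLError and socket.timeout both subclass OSError, so a hung
            # or failed transfer lands here and the loop tries again.
            print(f"attempt {attempt} failed: {e}")
    return False

The loop body would then just call download_with_retry(url, fullpath) instead of urlretrieve directly.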