The requests approach you are referring to is also suitable for large files. As was already pointed out, small files can always be downloaded with a single requests.get call:
import requests

with open("destination.jpg", "wb") as dst_file:
    dst_file.write(requests.get("http://example.com/img.jpeg").content)
If you want a solution that is suitable for large files, using requests is not tricky at all.
Actually, when you take a look at the urllib.urlretrieve code, you'll see that under the hood it does basically the same operations you would need to do with requests.get and stream=True, except (as pointed out in @y0prst's answer) it does not check the response status code, so it will also write the content of error responses (e.g. an HTTP 500 page) into the local file.
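For illustration, here is a minimal sketch of that pitfall (Python 2; the URL is hypothetical):

# urllib.urlretrieve saves whatever the server returns, even an HTML
# error page, without raising for a 404/500 status.
import urllib
urllib.urlretrieve("http://example.com/missing.jpg", "local.jpg")
# local.jpg may now contain the error page body instead of image data.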
You can define a function like this:
import requests

def requests_retrieve(url, filename, chunk_size=1024):
    # Stream the response, raise on HTTP error statuses,
    # and write the body to disk chunk by chunk.
    with open(filename, "wb") as dst_file:
        resp = requests.get(url, stream=True)
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size):
            dst_file.write(chunk)
And call it like urllib.urlretrieve:

requests_retrieve(url, localName)
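As an aside, if you'd rather not loop over chunks yourself, the raw response stream can be copied straight into the file with shutil.copyfileobj. A minimal sketch (the name requests_retrieve_raw is just illustrative):

import shutil
import requests

def requests_retrieve_raw(url, filename):
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(filename, "wb") as dst_file:
        # resp.raw is the undecoded body stream; fine for images, but it
        # bypasses the transparent gzip/deflate decoding requests performs.
        shutil.copyfileobj(resp.raw, dst_file)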