curl has an option to directly save file and header data on disk:
curl_setopt($curl_obj, CURLOPT_WRITEHEADER, $header_handle);
curl_setopt($curl_obj, CURLOPT_FILE, $file_handle);
Is there the same ability in python-requests?
As far as I know, requests does not provide a function that saves content to a file directly, but you can stream the response body to disk yourself:
import requests

with open('local-file', 'wb') as f:
    r = requests.get('url', stream=True)
    f.writelines(r.iter_content(1024))
See the requests.Response.iter_content documentation:
iter_content(chunk_size=1, decode_unicode=False)
Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place.
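To illustrate both parameters, here is a minimal sketch; the URL and the 8192-byte chunk size are placeholders, and decode_unicode=True only yields str chunks when the response has a known encoding (otherwise requests falls back to raw bytes):

import requests

url = 'https://example.com/data.txt'   # placeholder URL

# Raw bytes: each chunk is a bytes object of up to ~8192 bytes.
r = requests.get(url, stream=True)
total = sum(len(chunk) for chunk in r.iter_content(chunk_size=8192))
print(total, 'bytes downloaded')

# Decoded text: chunks come back as str, provided the encoding is known.
r = requests.get(url, stream=True)
r.encoding = r.encoding or 'utf-8'     # assumption: treat a missing charset as UTF-8
text = ''.join(r.iter_content(chunk_size=8192, decode_unicode=True))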
If you're saving something that is not a text file, don't use f.writelines(). Instead, use one of these:
import requests

try:
    r = requests.get(chosen, stream=True)
except Exception as E:
    print(E)
    # handle exceptions here.

# both methods work here...
with open(filepath, 'wb') as handle:
    for block in r.iter_content(1024):
        handle.write(block)

# or...
import shutil

with open(filepath, 'wb') as handle:
    shutil.copyfileobj(r.raw, handle)
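One caveat with the shutil approach, not mentioned in the answer above but standard requests/urllib3 behaviour: r.raw is the stream before any Content-Encoding (e.g. gzip) is undone, so you may want to ask urllib3 to decode it first. A sketch, reusing the same placeholder names:

import shutil
import requests

r = requests.get(chosen, stream=True)    # 'chosen' is the placeholder URL from above
r.raw.decode_content = True              # let urllib3 strip gzip/deflate encoding on read
with open(filepath, 'wb') as handle:     # 'filepath' is the placeholder output path
    shutil.copyfileobj(r.raw, handle)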
shutil is much more flexible for dealing with missing folders, recursive file copying, and so on, and it lets you save the raw data from requests without worrying about block sizes and all that.
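Neither snippet covers the CURLOPT_WRITEHEADER half of the question. requests has no single option for that, but the parsed headers are available on r.headers, so a minimal sketch (file names are placeholders) is to write them out yourself alongside the body:

import requests

r = requests.get('url', stream=True)

# Roughly CURLOPT_WRITEHEADER: dump status line and response headers to their own file.
with open('local-file.headers', 'w') as header_handle:
    header_handle.write(f'{r.status_code} {r.reason}\n')
    for name, value in r.headers.items():
        header_handle.write(f'{name}: {value}\n')

# Roughly CURLOPT_FILE: stream the body to disk as shown above.
with open('local-file', 'wb') as file_handle:
    for block in r.iter_content(1024):
        file_handle.write(block)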