I am reading huge data from a request and saving it to a file. This works for under 1 GB of data, but for 1 GB to 5 GB it takes a very long time, the data never appears in the file, and I get connection errors from the source.
The piece of code I tried:
import requests

with requests.get(url, ...) as r:
    with open(file, 'wb') as f:
        for chunk in r.iter_content(chunk_size=10000):
            if chunk:
                f.write(chunk)
                f.flush()
Any suggestions to speed up the download and save it to a file would be helpful. I tried different chunk sizes and commented out flush(), but neither made much improvement.
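Would something along these lines help? This is only a minimal sketch of what I understand a streaming download to look like, assuming stream=True and a larger chunk_size are the relevant knobs; the URL, output path, and timeout values are placeholders, not my real ones:

import requests

url = "https://example.com/large-file"   # placeholder URL
file = "large-file.bin"                  # placeholder output path

# stream=True keeps requests from reading the whole body into memory;
# iter_content then yields the response in chunks as they arrive.
with requests.get(url, stream=True, timeout=(10, 300)) as r:
    r.raise_for_status()
    with open(file, "wb") as f:
        for chunk in r.iter_content(chunk_size=1024 * 1024):  # 1 MiB chunks
            if chunk:
                f.write(chunk)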