I have a working downloader built with the requests module in Python. The problem is that the server seems to time out while downloading a large file, and the download just stops iterating over the content partway through. No errors are printed because requests simply thinks it has reached the end of the content. Here is the code for the downloader:
import requests

def requests_download(url: str, file_name: str):
    # args is defined elsewhere in the script and holds the session cookies
    response = requests.get(url=url, stream=True, cookies=args['cookies'])
    with open(file_name, 'wb') as f:
        # Write the body to disk in 64 KiB chunks instead of loading it all into memory
        for chunk in response.iter_content(chunk_size=1024 * 64):
            f.write(chunk)
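The best I can do right now is detect the truncation after the fact by comparing the bytes written against the Content-Length header, roughly like the sketch below. The function name and the timeout values are just placeholders, the server may not send Content-Length at all, and the byte count can be off if the response is compressed, so this is only a best-effort check:

def requests_download_checked(url: str, file_name: str):
    # Sketch only: the (10, 60) connect/read timeouts are guesses, not tuned values
    response = requests.get(url=url, stream=True, cookies=args['cookies'],
                            timeout=(10, 60))
    expected = response.headers.get('Content-Length')
    written = 0
    with open(file_name, 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024 * 64):
            f.write(chunk)
            written += len(chunk)
    # If the server sent a Content-Length, compare it with what actually arrived
    if expected is not None and written < int(expected):
        print(f'Incomplete download: got {written} of {expected} bytes')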
I have tried adding other headers such as Connection: keep-alive, and I have tried setting my own User-Agent, but neither seemed to help (a rough sketch of those attempts is at the bottom of this post). I'd rather keep stream=True since some of these files can be large and I don't want them loaded into memory all at once.

I have only seen one other post with a similar issue, and they also believed the server was timing them out. I'd just like confirmation that this isn't something I can fix on my end and would instead need to be fixed on the server's side. Please don't recommend solutions that use other modules, except maybe urllib, though I imagine urllib would have the same problem.

Sorry, I don't have much of a way to reproduce the problem. Maybe throttle your connection way down and try downloading a relatively large video file?

Also, a side question: what is the recommended chunk_size for small to large files?
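For reference, the header attempts I mentioned looked roughly like this (the User-Agent value here is only a placeholder for the real browser string I tried):

headers = {
    'Connection': 'keep-alive',
    'User-Agent': 'Mozilla/5.0',  # placeholder; I used an actual browser user agent string
}
response = requests.get(url=url, stream=True, cookies=args['cookies'], headers=headers)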