
I am looking for a python download method where I can download the file directly to disk using a specified destination and file name, and if the download takes too long, time out.

From here it looks like there are three main Python download options:

https://stackabuse.com/download-files-with-python/

urllib.request.urlretrieve

does not have a timeout option.

requests.get(url)

Has a timeout option, but requires the file to be opened before saving it. It also looks like it gets the file name from the URL.

wget.download

also doesn't seem to have a timeout option.

Are there any python download methods that satisfy all three of my requirements?

petezurich
SantoshGupta7
    Why do you have the requirement of not opening before saving? – vekerdyb Jun 26 '19 at 19:22
  • There may be some files that Python may not know how to handle, like videos, pictures, compressed files, programs, etc. And Python may run into an error trying to open them. Also, it may save on RAM and time. – SantoshGupta7 Jun 26 '19 at 19:33
  • "Opening" a file is asking the OS to allocate disk space and allow your program to write data. All files can be represented as binary streams, so what is in the file is not important. You can stream to a file in chunks to avoid memory concerns with large files. https://stackoverflow.com/a/39217788/1617748 – vekerdyb Jun 26 '19 at 19:42
  • Oh interesting. I don't have any large files, but I see now that my requirement of not opening the file is misplaced, as it looks like all 3 download methods open the file at some point. – SantoshGupta7 Jun 26 '19 at 19:44
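The chunk-by-chunk streaming described in the comments above can be sketched with just the standard library (the function name, chunk size, and defaults here are illustrative, not from any particular library):

```python
import urllib.request

def download_in_chunks(url, dest_path, timeout=10, chunk_size=8192):
    # urlopen's timeout covers connecting and each socket read,
    # not the total time of the whole download.
    with urllib.request.urlopen(url, timeout=timeout) as response:
        # The destination file still has to be opened -- every
        # download method does this, either explicitly or internally.
        with open(dest_path, "wb") as out_file:
            while True:
                # Only one chunk is ever held in memory, so large
                # files (videos, archives, etc.) are not a problem.
                chunk = response.read(chunk_size)
                if not chunk:
                    break
                out_file.write(chunk)
```

Because the file is opened in binary mode and the bytes are copied verbatim, it does not matter what kind of file it is; Python never tries to interpret the contents.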

1 Answer


I don't think you can write to a file without opening it.

urllib opens it too, just internally, so none of the three methods actually avoids that step.
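That said, urllib.request.urlopen does accept a timeout argument, so a small sketch that opens the destination file explicitly but otherwise meets all three requirements (timeout, chosen destination, written straight to disk) might look like this (the function name and defaults are my own placeholders):

```python
import shutil
import urllib.request

def download(url, dest_path, timeout=10):
    # The timeout applies to connecting and to each socket read,
    # not to the total duration of the download.
    with urllib.request.urlopen(url, timeout=timeout) as response:
        # The file is still opened -- there is no way around that --
        # but the response body goes straight to disk.
        with open(dest_path, "wb") as out_file:
            shutil.copyfileobj(response, out_file)
```

shutil.copyfileobj copies in fixed-size chunks under the hood, so memory use stays small even for large files.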

vekerdyb
  • oh interesting, I'm guessing it's the same for wget as well? If so, I am wondering what is the fundamental difference between the three methods. – SantoshGupta7 Jun 26 '19 at 19:31