
I wrote a Python script that downloads files from the internet. However, every time I run the script, my computer seems to freeze.

Code:

import requests

response = requests.get(url, stream=True)

with open(local_filename, 'wb') as f:
    for chunk in response.iter_content(chunk_size=2048):
        if chunk:
            f.write(chunk)
            f.flush()

What can I change so that the download doesn't freeze my computer?

Should I allocate a limited amount of RAM? Or should I run the download in a separate thread?

Any advice would be appreciated. Thank you.

tipsywacky
  • It could be that your computer can't reach the site, and because I think get() is a blocking call, this could appear like a freeze. Can you show us more of the code? Do you have any proof that you're reaching the site you're trying to download from? – SH7890 Jun 14 '17 at 13:01
  • Well, Python is not C. The reason your computer appears to freeze is that you have a very tight loop that doesn't exit. Is this your real code? – e4c5 Jun 14 '17 at 13:01
  • Why not control your resources from the shell before you execute your script? For example, on Linux, `nice` or `cpulimit` can limit your resource usage. – suvy Jun 14 '17 at 13:04
  • My guess is that it's a very large file... are you working with a small amount of RAM or an old processor? To test my theory, try `for chunk in tqdm.tqdm(response.iter_content(chunk_size=2048)):` (a runnable version is sketched after these comments). – Zev Averbach Jun 14 '17 at 13:04
  • As e4c5 states, the above code does not exit, so it will probably only terminate after a keyboard interrupt. Also see the following question and answer; it looks to be what you want to do: https://stackoverflow.com/questions/16694907/how-to-download-large-file-in-python-with-requests-py – Maarten Jun 14 '17 at 13:04
  • @SH7890 Yes, it seems like a blocking call; I got the file downloaded and saved. It just seems like my whole computer froze while downloading it, even though my computer is dual-core. – tipsywacky Jun 14 '17 at 13:09
  • @e4c5 Yes, it's the real code. I took it from https://stackoverflow.com/questions/16694907/how-to-download-large-file-in-python-with-requests-py – tipsywacky Jun 14 '17 at 13:09
  • @ZevAverbach It's pretty big; I have 4 GB of RAM. But I'm not sure if there is a proper async way to make the download more responsive. – tipsywacky Jun 14 '17 at 13:12
  • +1 for @zwer's answer, but in case you want to speed things up, try this: https://stackoverflow.com/a/13973531/4386191 – Zev Averbach Jun 14 '17 at 13:24
  • @ZevAverbach That's what I'm looking for. Thanks. – tipsywacky Jun 14 '17 at 13:25
  • @suvy I will give it a try too. Thanks for pointing cpulimit out. – tipsywacky Jun 14 '17 at 13:29
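
A runnable version of the tqdm suggestion from the comments above — a sketch, assuming `requests` and `tqdm` are installed and that `url` and `local_filename` are defined as in the question. The progress bar makes a slow download visibly advance instead of looking like a freeze:

import requests
from tqdm import tqdm

response = requests.get(url, stream=True)
# total size in bytes, if the server reports it (falls back to an unsized bar)
total = int(response.headers.get('content-length', 0))

with open(local_filename, 'wb') as f:
    with tqdm(total=total or None, unit='B', unit_scale=True) as bar:
        for chunk in response.iter_content(chunk_size=2048):
            if chunk:
                f.write(chunk)
                bar.update(len(chunk))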

1 Answer


If I were you, I'd increase the chunk size and add some breathing room for the underlying I/O, like so:

import time
import requests

response = requests.get(url, stream=True)

with open(local_filename, 'wb') as f:
    for chunk in response.iter_content(chunk_size=1024 * 1024):  # let's use 1 MiB chunks
        if chunk:
            f.write(chunk)
            f.flush()
        time.sleep(0.05)  # a 50 ms delay won't kill anyone

If that doesn't help, you have deeper issues on your system / in your code than this piece of code.
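
Since the question also asks about a thread: moving the download off the main thread keeps the rest of your program responsive, although it won't by itself reduce disk or CPU load. A minimal sketch, again assuming `url` and `local_filename` are defined as in the question:

import threading
import requests

def download(url, local_filename):
    # same streaming loop as above, just run off the main thread
    response = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024 * 1024):
            if chunk:
                f.write(chunk)

t = threading.Thread(target=download, args=(url, local_filename), daemon=True)
t.start()
# ... the main thread is free to do other work here ...
t.join()  # wait for the download to finish before using the file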

zwer