
Right now I am trying to execute asynchronous requests that have no dependency on each other, similar to how FTP can upload / download more than one file at once.

I am using the following code:

rec = requests.get("https://url", stream=True)

with

rec.raw.read()

to get the responses.

But I would like to run this same piece of code much faster, without blocking on each server response, which takes about 2 seconds per request.

Jack Hales
1 Answer


The easiest way to do something like that is to use threads.

Here is a rough example of one of the ways you might do this.

import requests
from multiprocessing.dummy import Pool  # a thread-backed Pool with the multiprocessing API

pool = Pool(4)  # the number represents how many jobs you want to run in parallel.

def get_url(url):
    rec = requests.get(url, stream=True)
    return rec.raw.read()

for result in pool.map(get_url, ["http://url/1", "http://url/2"]):
    do_things(result)
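Note that multiprocessing.dummy exposes the same interface as multiprocessing but is backed by threads rather than processes, which suits I/O-bound work like this: the threads spend most of their time blocked waiting on the network, so the GIL is not a bottleneck.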
Shadow
    Wouldn’t using the grequests module be an easier way to do asynchronous get requests? From my experience, it has been much faster. – Jack Moody Nov 07 '18 at 02:04
    As mentioned - this is only one of the ways you could do it. There are plenty of others, `threading` module included. – Shadow Nov 07 '18 at 02:22
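As the first comment suggests, grequests is another option: it wraps requests with gevent so a batch of GETs can be sent concurrently. A minimal sketch, assuming grequests is installed and using the same placeholder URLs and do_things() callback as the answer above:

import grequests

urls = ["http://url/1", "http://url/2"]

# Build unsent requests, then send them concurrently.
# grequests.map() returns responses in the same order as the input.
reqs = (grequests.get(u, stream=True) for u in urls)

for rec in grequests.map(reqs, size=4):  # size caps how many run at once
    if rec is not None:                  # failed requests come back as None
        do_things(rec.raw.read())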