19

I need to make asynchronous requests using the Requests library. In Python 3.7, `from requests import async` raises `SyntaxError: invalid syntax`.

`async` has become a reserved keyword in Python 3.7. How do I get around this situation?

Gigaflop
sigmus
  • 1
    There are several ways to import. `importlib.import_module(".async", "requests")`. But actually I also get `ModuleNotFoundError: No module named 'requests.async'`. – Sraw Aug 03 '18 at 15:14
  • 1
    The problem also happens with Python 3.6. Apparently `requests` let go of the `async` module altogether a long time ago, but the docs are not very clear about that. – sigmus Aug 06 '18 at 10:10
  • Yep, I think so. You can try `aiohttp` or `grequests`. – Sraw Aug 06 '18 at 10:12

3 Answers

28

Lukasa, one of the requests maintainers, said:

At the current time there are no plans to support async and await. This is not because they aren't a good idea: they are. It's because to use them requires quite substantial code changes. Right now requests is a purely synchronous library that, at the bottom of its stack, uses httplib to send and receive data. We cannot move to an async model unless we replace httplib.

The best we could do is provide a shorthand to run a request in a thread, but asyncio already has just such a shorthand, so I don't believe it would be valuable.

Right now I am quietly looking at whether we can rewrite requests to work just as well in a synchronous environment as in an async one. However, the reality is that doing so will be a lot of work, involving rewriting a lot of our stack, and may not happen for many years, if ever.

But don't worry, aiohttp is very similar to requests.

Here's an example.

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        print(html)

if __name__ == '__main__':
    asyncio.run(main())
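The same pattern scales to many concurrent fetches with `asyncio.gather`. A minimal sketch of that idea, with the network call replaced by `asyncio.sleep` so it runs without a connection (`fetch_one` and the delays are illustrative stand-ins, not part of aiohttp):

```python
import asyncio

async def fetch_one(tag, delay):
    # Stand-in for an aiohttp request: sleep instead of network I/O
    await asyncio.sleep(delay)
    return tag

async def main():
    # Schedule all "requests" concurrently; gather returns results
    # in the order the coroutines were passed, not completion order
    results = await asyncio.gather(
        fetch_one('a', 0.03),
        fetch_one('b', 0.01),
        fetch_one('c', 0.02),
    )
    return results

if __name__ == '__main__':
    print(asyncio.run(main()))
```

The total runtime is roughly the longest single delay, not the sum, which is the point of running the coroutines concurrently.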
spedy
  • 3
    upvoted! more examples can be found in the following link: https://stackoverflow.com/questions/22190403/how-could-i-use-requests-in-asyncio – pangyuteng Mar 06 '19 at 23:13
5

You can use asyncio to make asynchronous requests. Here is an example:

import asyncio
import requests

async def main():
    loop = asyncio.get_running_loop()
    futures = [
        loop.run_in_executor(
            None,
            requests.get,
            'http://example.org/'
        )
        for i in range(20)
    ]
    for response in await asyncio.gather(*futures):
        pass

asyncio.run(main())
zbys
  • This is running a sync method in a thread pool, so it cannot take advantage of async IO. – Sraw Aug 03 '18 at 14:59
  • So is this async in nature or blocking? – sigmus Aug 06 '18 at 10:11
  • it's an async function – zbys Aug 06 '18 at 13:06
  • 1
    Is there a way to limit the number of simultaneous connections at once? If I need 100K requests but have to limit to 10 at a time, how can that be done? – user1717828 Oct 21 '19 at 14:34
  • @user1717828 divide it to chunks and use "run_until_complete" on each chunk – Nir O. Jul 15 '21 at 12:34
  • @NirO. I don't understand. You mean chunks of 10? Wouldn't that mean when one finishes, we have to wait for the other 9 to finish before picking up a fresh one? – user1717828 Jul 16 '21 at 13:30
  • Where in your original question did you present a requirement for all threads to be working continuously? – Nir O. Jul 19 '21 at 08:07
  • Anyway, to achieve non-continuous operation, each thread will not take its input from `run_in_executor` nor return its output to `response`; instead, it will take its input from a common list and put its output in another common list, running in a loop. You will then have the further responsibility of not blocking the entire app. – Nir O. Jul 22 '21 at 11:35
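One way to cap the number of simultaneous connections with this same approach is to pass `run_in_executor` a `ThreadPoolExecutor` with a fixed worker count, so at most 10 blocking calls run at once no matter how many are scheduled. A sketch with a stand-in `do_get` function in place of `requests.get` (hypothetical, to keep the example runnable offline):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def do_get(i):
    # Stand-in for requests.get: a blocking call
    time.sleep(0.01)
    return i * 2

async def main():
    loop = asyncio.get_running_loop()
    # The pool, not asyncio, limits concurrency: only 10 of the
    # 100 scheduled calls execute at any one time
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [loop.run_in_executor(pool, do_get, i) for i in range(100)]
        return await asyncio.gather(*futures)

if __name__ == '__main__':
    results = asyncio.run(main())
    print(len(results))
```

As each worker finishes, it immediately picks up the next pending call, so there is no waiting for a whole batch of 10 to drain before fresh work starts.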
1

You could use hyper-requests (https://github.com/edjones84/hyper-requests), which allows you to pass in a list of URLs and parameters to be run asynchronously, like so:

import hyper_requests

# Define the request parameters
params = [
    {'url': 'http://httpbin.org/get' , 'data': 'value1'},
    {'url': 'http://httpbin.org/get' , 'data': 'value3'},
    {'url': 'http://httpbin.org/get' , 'data': 'value5'},
    {'url': 'http://httpbin.org/get' , 'data': 'value7'},
    {'url': 'http://httpbin.org/get' , 'data': 'value9'}
]

# Execute the requests asynchronously across 10 workers
returned_data = hyper_requests.get(request_params=params, workers=10)

# Process the returned data
for response in returned_data:
    print(response)
  • Your answer did not follow the author's needs. They REQUIRED using the `requests` library. The `requests_async` one is made by the same creator with the same API so they're different. – jett8998 May 24 '23 at 12:32