47

In Bash, it is possible to execute a command in the background by appending `&`. How can I do it in Python?

import requests

url = 'http://127.0.0.1:8000/test/'  # example endpoint

while True:
    data = raw_input('Enter something: ')  # Python 2; use input() on Python 3.
    requests.post(url, data=data)  # Don't wait for it to finish.
    print('Sending POST request...')  # This should appear immediately.
octosquidopus
  • Unlike CPU-bound concurrency issues in Python, this could possibly be resolved with a separate thread, or the use of `multiprocessing.dummy` for a thread pool. – Andrew Gorcester Nov 19 '14 at 17:01

7 Answers

82

Here's a hacky way to do it:

import requests

try:
    requests.get("http://127.0.0.1:8000/test/", timeout=0.0000000001)
except requests.exceptions.Timeout:  # Catches both ConnectTimeout and ReadTimeout.
    pass

Edit: for those of you who observed that this will not await a response: that is my understanding of the question, "fire and forget... do not wait for it to finish". There are much more thorough and complete ways to do it with threads or async if you need response context, error handling, etc.
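As one such more thorough approach, here is a minimal sketch using the standard library's `concurrent.futures`, assuming Python 3 and the same local test endpoint; the done-callback gives you a place for error handling without blocking the main code:

import concurrent.futures
import requests

executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def log_failure(future):
    # Runs once the request finishes; future.exception() is None on success.
    ex = future.exception()
    if ex is not None:
        print('Request failed:', ex)

future = executor.submit(requests.post, 'http://127.0.0.1:8000/test/', data={'key': 'value'})
future.add_done_callback(log_failure)
print('Sending POST request...')  # Prints immediately; the request runs in the background.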

keithhackbarth
  • You can often lose a response this way. The question was about requests.post, and its body is also more fragile with a very short timeout than a simple GET. – hynekcer Aug 10 '17 at 16:36
  • Works well when we do not need any response from the API. – abggcv Oct 21 '18 at 08:55
  • When trying this, the server doesn't receive the request. Any idea? – Ahmed Nour Jamal El-Din Nov 12 '19 at 13:35
  • Try increasing the timeout to 1.0. – Sumant Agnihotri Mar 31 '20 at 06:39
  • This approach is hacky. It will fail in many circumstances. – vy32 Jan 06 '21 at 20:10
  • From the bottom of my heart, THANK YOU! This timeout works wonders for my use case. – Cees Aug 26 '22 at 00:42
  • One more thing: this hacky way might only work *locally* and not in a mission-critical system. You could use a `threading.Thread` object to start a request and continue on with your code. – Muneeb Ahmad Khurram Oct 05 '22 at 07:16
  • Try it on `http://www.google.com` and you'll get a *ConnectionError*. You cannot ignore the response code, because many things can go wrong and nobody can ever know! You are trading away too much good information for an easy solution! – Keivan Ipchi Hagh Oct 12 '22 at 14:44
  • I have tested this "solution" with a timeout of 0.1s while posting messages to Microsoft Bot Framework (webchat), and **it is simply broken**: some messages never reach the user. An explanation I suspect, not mentioned in the comments: the connection could be interrupted before the *request* has been fully received by the server, in which case the server wouldn't be able to parse the payload and would fail. Or simply, the server has a way to cancel the action when the connection is closed. DON'T USE THIS. – Arnaud P Nov 30 '22 at 15:09
47

I use `multiprocessing.dummy.Pool`. I create a singleton thread pool at the module level, and then use `pool.apply_async(requests.get, [params])` to launch the task.

This command gives me a future, which I can add to a list with other futures indefinitely until I'd like to collect all or some of the results.

`multiprocessing.dummy.Pool` is, against all logic and reason, a THREAD pool and not a process pool.

Example (works in both Python 2 and 3, as long as requests is installed):

from multiprocessing.dummy import Pool

import requests

pool = Pool(10) # Creates a pool with ten threads; more threads = more concurrency.
                # "pool" is a module attribute; you can be sure there will only
                # be one of them in your application
                # as modules are cached after initialization.

if __name__ == '__main__':
    futures = []
    for x in range(10):
        futures.append(pool.apply_async(requests.get, ['http://example.com/']))
    # futures is now a list of 10 futures.
    for future in futures:
        print(future.get()) # For each future, wait until the request is
                            # finished and then print the response object.

The requests will be executed concurrently, so running all ten of these requests should take no longer than the longest one. This strategy will only use one CPU core, but that shouldn't be an issue because almost all of the time will be spent waiting for I/O.

Andrew Gorcester
  • Your solution looks interesting, but also confusing. What's a future? What's the module level? Could you provide a working example? – octosquidopus Dec 01 '14 at 16:49
  • @octosquidopus added an example to the answer – Andrew Gorcester Dec 01 '14 at 19:27
  • Your example works well, but that is not exactly what I am trying to do. Instead of sending concurrent requests, I would like to send them one at a time, but without blocking the rest of the code. My example should now be less ambiguous. – octosquidopus Dec 02 '14 at 21:09
  • That's fine, you can do that with my example too. All my example does is keep track of futures in a list, and add a bunch of them at once in a loop. If you remove the loop, and instead of `requests.post(*args, **kwargs)` you use `futures.append(pool.apply_async(requests.post, args, kwargs))` then you can use your model of firing the requests one at a time, but you won't have to wait for them to complete. The requests will run immediately when apply_async is hit, they won't wait until you collect the futures. – Andrew Gorcester Dec 02 '14 at 21:55
  • I finally figured out that the proper formatting of `requests.post(url, data=data)` is `pool.apply_async(s.post, [url, data, data])`. Now, why do I get a "NameError: name 'data' is not defined" error whenever I change the name of the `data` variable? One would think that it wouldn't matter... – octosquidopus Dec 02 '14 at 23:37
  • I think the formatting should be `pool.apply_async(requests.post, [url], {'data': data})`. The function signature is essentially (function_to_run, list_of_positional_args, dict_of_kwargs); see the sketch after these comments. – Andrew Gorcester Dec 03 '14 at 00:20
  • How can I get a `.status_code` from a request sent using `r = pool.apply_async(...)` without putting it into a list? `r.status_code` raises an `AttributeError`. – octosquidopus Dec 07 '14 at 19:51
  • `r` isn't a response object, it's a future for a response object. You get the real response with `r.get()` -- that produces a response object that's the same as any other. If you only want the status code you could do `r.get().status_code` (note that if the request resulted in an exception, the exception will be raised when you call `get()`). You can also do `response = r.get()` and proceed as normal. If you `r.get()` before the actual asynchronous request is complete, then you will automatically wait until the request is complete before proceeding. – Andrew Gorcester Dec 07 '14 at 20:06
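Putting those comments together, here is a minimal sketch of the one-at-a-time, fire-and-forget pattern discussed above (Python 3, with a hypothetical local endpoint):

from multiprocessing.dummy import Pool

import requests

pool = Pool(10)
url = 'http://127.0.0.1:8000/test/'  # hypothetical endpoint

while True:
    data = input('Enter something: ')
    future = pool.apply_async(requests.post, [url], {'data': data})  # Fires immediately.
    print('Sending POST request...')  # Appears right away; call future.get() later if needed.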
10

An elegant solution from Andrew Gorcester. In addition, without using futures, it is possible to use the `callback` and `error_callback` arguments of `apply_async` (see the documentation; `error_callback` requires Python 3) in order to perform asynchronous processing:

from multiprocessing.dummy import Pool

import requests
from requests import Response

pool = Pool(10)

def on_success(r: Response):
    if r.status_code == 200:
        print(f'Post succeeded: {r}')
    else:
        print(f'Post failed: {r}')

def on_error(ex: Exception):
    print(f'Post request failed: {ex}')

pool.apply_async(requests.post, args=['http://server.host'], kwargs={'json': {'key': 'value'}},
                 callback=on_success, error_callback=on_error)
Nemolovich
6

According to the documentation, you should move to another library:

Blocking Or Non-Blocking?

With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.

If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks.

Two excellent examples are grequests and requests-futures.
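For example, here is a minimal fire-and-forget sketch with requests-futures, assuming the package is installed (pip install requests-futures) and using a hypothetical local endpoint:

from requests_futures.sessions import FuturesSession

session = FuturesSession()

# Returns a future immediately; the request runs on a background thread.
future = session.post('http://127.0.0.1:8000/test/', data={'key': 'value'})
print('Sending POST request...')  # Prints right away.

# Later, only if you ever need the result:
response = future.result()
print(response.status_code)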

Romain Jouin
4

Simplest and Most Pythonic Solution using threading

A simple way to send a POST/GET request, or to execute any other function, without waiting for it to finish is to use the built-in Python module `threading`.

import threading
import requests

def send_req():
    requests.get("http://127.0.0.1:8000/test/")


for x in range(100):
    threading.Thread(target=send_req).start()  # Starts a new thread and continues immediately.

Other Important Features of threading

  • You can turn these threads into daemons using `thread_obj.daemon = True`

  • You can wait for one to finish executing, and then continue, using `thread_obj.join()`

  • You can check whether a thread is alive using `thread_obj.is_alive()`, which returns a bool

  • You can also check the active thread count with `threading.active_count()` (a short sketch of these features follows the documentation link below)

Official Documentation
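A minimal sketch of those features together, assuming the same local test endpoint as above:

import threading
import requests

def send_req():
    try:
        requests.get("http://127.0.0.1:8000/test/", timeout=10)
    except requests.RequestException as ex:
        print('Request failed:', ex)

t = threading.Thread(target=send_req)
t.daemon = True                  # Daemon threads won't keep the interpreter alive on exit.
t.start()                        # Returns immediately; the request runs in the background.

print(threading.active_count())  # At least 2: the main thread plus the worker.
print(t.is_alive())              # Likely True while the request is still in flight.
t.join()                         # Optionally block here until the thread finishes.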

1

If you can write the code to be executed in a separate Python program, a possible solution is to launch it with the `subprocess` module.
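A minimal sketch of that route, where send_request.py is a hypothetical worker script containing the request logic:

import subprocess
import sys

# Launch a separate Python process and return immediately;
# the child keeps running independently while the parent continues.
subprocess.Popen([sys.executable, 'send_request.py', 'http://127.0.0.1:8000/test/'])
print('Worker launched; not waiting for it to finish.')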

Otherwise, you may find this question and its related answer useful: the trick is to use the threading library to start a separate thread that will execute the separate task.

A caveat with both approaches is the number of items (that is to say, the number of threads) you have to manage. If there are too many items in the parent, you may consider halting each batch of items until at least some threads have finished, but I think this kind of management is non-trivial.

For something more sophisticated, you can use an actor-based approach; I have not used this library myself, but I think it could help in that case.

Chosmos
0
from multiprocessing.dummy import Pool

import requests

pool = Pool()

def on_success(r):
    # Receives the return value of call_api (the Response object).
    print('Post succeeded:', r.status_code)

def on_error(ex):
    print('Post request failed:', ex)

def call_api(url, data, headers):
    # Return the response so that on_success receives it.
    return requests.post(url=url, data=data, headers=headers)

def pool_processing_create(url, data, headers):
    pool.apply_async(call_api, args=[url, data, headers],
                     callback=on_success, error_callback=on_error)