
Tried to write a process-based timeout (sync) on the cheap, like this:

from concurrent.futures import ProcessPoolExecutor

def call_with_timeout(func, *args, timeout=3):
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, *args)
        return future.result(timeout=timeout)

But it seems the timeout argument passed to future.result doesn't really work as advertised.

>>> t0 = time.time()
... call_with_timeout(time.sleep, 2, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
wall time: 2.016767978668213

OK.

>>> t0 = time.time()
... call_with_timeout(time.sleep, 5, timeout=3)
... delta = time.time() - t0
... print('wall time:', delta)
# TimeoutError

Not OK - unblocked after 5 seconds, not 3 seconds.

Related questions show how to do this with thread pools, or with signal. How to timeout a process submitted to a pool after n seconds, without using any private API of multiprocessing? Hard kill is fine, no need to request a clean shutdown.

wim

2 Answers


You might want to take a look at pebble.

Its ProcessPool was designed to solve this exact issue: enabling timeout and cancellation of running tasks without needing to shut down the entire pool.

When a future times out or is cancelled, the worker process is actually terminated, effectively stopping the execution of the scheduled function.

Timeout:

import pebble
from concurrent.futures import TimeoutError

pool = pebble.ProcessPool(max_workers=1)
future = pool.schedule(func, args=args, timeout=1)
try:
    future.result()
except TimeoutError:
    print("Timeout")

Example:

import pebble

def call_with_timeout(func, *args, timeout=3):
    with pebble.ProcessPool(max_workers=1) as pool:
        future = pool.schedule(func, args=args, timeout=timeout)
        return future.result()
noxdafox
  • I was about to add examples myself. You were indeed faster :) – noxdafox Jan 02 '19 at 19:28
  • Yeah, I actually had something going with pebble already. I was kinda hoping there was some api in stdlib... but +1 anyway – wim Jan 02 '19 at 19:30
  • 4
    I built `pebble` exactly because of that. The stdlib Pool implementations `concurrent.futures` and `multiprocessing` are all a bit too optimistic. – noxdafox Jan 02 '19 at 19:30
  • 2
    Ah, I hadn't realised you're the pebble author. Thanks for your work! – wim Jan 02 '19 at 19:32
  • Is the timeout killing subprocess behavior only with ProcessPool or even without the ProcessPool as in the second example here: https://pypi.org/project/Pebble/? – Saravanan Setty May 29 '20 at 21:15
  • Adding another comment since I can't edit the existing one. I am using pebble's ProcessPool, but it looks like the processes which time out never get killed and stay in the background forever. Is this expected behavior, or am I doing something wrong? – Saravanan Setty May 29 '20 at 21:49
  • For anyone looking for a complete usage example of Pebble, I found the [first one in the readthedocs](https://pebble.readthedocs.io/en/latest/#pools) to be more helpful than the one on PyPI. – dmahr Jan 29 '21 at 14:43

The timeout is behaving as it should: future.result(timeout=timeout) raises TimeoutError after the given timeout. The delay comes from shutting down the pool, because exiting the with block calls shutdown(wait=True), which waits for all pending futures to finish executing.

You can make the shutdown happen in the background by calling shutdown(wait=False), but the overall Python program won't end until all pending futures finish executing anyway:

def call_with_timeout(func, *args, timeout=3):
    pool = ProcessPoolExecutor(max_workers=1)
    try:
        future = pool.submit(func, *args)
        return future.result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)

The Executor API offers no way to cancel a call that's already executing. future.cancel() can only cancel calls that haven't started yet. If you want abrupt abort functionality, you should probably use something other than concurrent.futures.ProcessPoolExecutor.
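Since a hard kill is acceptable, one option that uses no private API is to drop the pool abstraction and manage a multiprocessing.Process directly: join with a timeout and terminate() if the worker is still alive. A minimal sketch (the helper name and error message are just illustrative); note it discards the function's return value, so you'd need a Pipe or Queue to get a result back:

```python
import multiprocessing
import time

def call_with_timeout(func, *args, timeout=3):
    # Run func in its own process so it can be hard-killed on timeout.
    proc = multiprocessing.Process(target=func, args=args)
    proc.start()
    proc.join(timeout)          # wait at most `timeout` seconds
    if proc.is_alive():
        proc.terminate()        # hard kill; the child gets no chance to clean up
        proc.join()             # reap the killed process
        raise TimeoutError(f"{func!r} timed out after {timeout} seconds")

if __name__ == "__main__":
    t0 = time.time()
    try:
        call_with_timeout(time.sleep, 5, timeout=1)
    except TimeoutError:
        print("killed after ~%.1f seconds" % (time.time() - t0))
```

Process.terminate() sends SIGTERM on Unix and uses TerminateProcess on Windows, so this works cross-platform without touching any _private attributes of the pool machinery.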

user2357112
  • Yes, but I don't want to wait for pending futures to finish executing. Just want them killed (which is why using a subprocess and not a worker thread in the first place). – wim Jan 02 '19 at 19:11
  • @wim: Answer expanded. – user2357112 Jan 02 '19 at 19:12
  • So is the answer essentially "there is no high-level API to do it"? Perhaps this is because `concurrent.futures`/`multiprocessing` must also work on Windows where SIGKILL is not necessarily available... – wim Jan 02 '19 at 19:15
  • @wim Would this work? https://stackoverflow.com/questions/29494001/how-can-i-abort-a-task-in-a-multiprocessing-pool-after-a-timeout – darkgbm Mar 18 '23 at 00:23