383

Here's the Python code to run an arbitrary command, returning its stdout data or raising an exception on non-zero exit codes:

import subprocess

proc = subprocess.Popen(
    cmd,
    stderr=subprocess.STDOUT,  # Merge stdout and stderr
    stdout=subprocess.PIPE,
    shell=True)

communicate is used to wait for the process to exit:

stdoutdata, stderrdata = proc.communicate()

The subprocess module does not support timeouts (the ability to kill a process running for more than X seconds), so communicate may take forever to run.

What is the simplest way to implement timeouts in a Python program meant to run on Windows and Linux?

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Sridhar Ratnakumar
  • 81,433
  • 63
  • 146
  • 187

32 Answers

230

In Python 3.3+:

from subprocess import STDOUT, check_output

output = check_output(cmd, stderr=STDOUT, timeout=seconds)

output is a byte string that contains the command's merged stdout and stderr data.

Unlike the proc.communicate() method, check_output raises CalledProcessError on a non-zero exit status, as specified in the question's text.

I've removed shell=True because it is often used unnecessarily. You can always add it back if cmd indeed requires it. If you add shell=True (i.e., if the child process spawns its own descendants), check_output() can return much later than the timeout indicates; see Subprocess timeout failure.

The timeout feature is available on Python 2.x via the subprocess32 backport of the 3.2+ subprocess module.

Josep
  • 676
  • 2
  • 8
  • 14
jfs
  • 399,953
  • 195
  • 994
  • 1,670
  • 18
    Indeed, and subprocess timeout support exists in the subprocess32 backport that I maintain for use on Python 2. http://pypi.python.org/pypi/subprocess32/ – gps Dec 09 '12 at 04:07
  • 10
    @gps Sridhar asked for cross platform solution, while your backport only supports POSIX : when I tried it out, MSVC complained (expected) about missing unistd.h :) – Shmil The Cat Mar 12 '13 at 13:43
  • If you don't need the output, you can just use the subprocess.call. – Kyle Gibson Mar 28 '13 at 16:46
  • 1
    Since Python3.5, use subprocess.run() with capture_output=True and use the encoding parameter to get useful output. – MKesper May 28 '20 at 09:33
  • 3
    @MKesper: 1- `check_output()` is the preferred way to get output (it returns the output directly, doesn't ignore errors, it is available since forever). 2- `run()` is more flexible but `run()` ignores error by default and requires additional steps to get output 3- [`check_output()` is implemented in terms of `run()`](https://github.com/python/cpython/blob/8bd216dfede9cb2d5bedb67f20a30c99844dbfb8/Lib/subprocess.py#L377-L421) and therefore it accepts most of the same arguments. 4- nit: `capture_output` is available since 3.7, not 3.5 – jfs May 29 '20 at 19:26
  • I never liked check_output as it forces you to mix stderr into stdout (it returns only the equivalent of subprocess.run(arguments).stdout). – MKesper Jun 08 '20 at 13:19
216

I don't know much about the low-level details; but, given that in Python 2.6 the API offers the ability to wait for threads and terminate processes, what about running the process in a separate thread?

import subprocess, threading

class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None

    def run(self, timeout):
        def target():
            print 'Thread started'
            self.process = subprocess.Popen(self.cmd, shell=True)
            self.process.communicate()
            print 'Thread finished'

        thread = threading.Thread(target=target)
        thread.start()

        thread.join(timeout)
        if thread.is_alive():
            print 'Terminating process'
            self.process.terminate()
            thread.join()
        print self.process.returncode

command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=3)
command.run(timeout=1)

The output of this snippet in my machine is:

Thread started
Process started
Process finished
Thread finished
0
Thread started
Process started
Terminating process
Thread finished
-15

where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15).

I haven't tested on Windows; but, aside from updating the example command, I think it should work, since I haven't found anything in the documentation that says that thread.join or process.terminate is not supported.

jcollado
  • 39,419
  • 8
  • 102
  • 133
  • 18
    +1 For being platform independent. I've run this on both linux and windows 7 (cygwin and plain windows python) -- works as expected in all three cases. – phooji Feb 17 '11 at 00:27
  • 7
    I've modified your code a bit in order to be able to pass native Popen kwargs and put it on gist. It is now ready to use multi purpose; https://gist.github.com/1306188 – kirpit Nov 09 '11 at 13:07
  • I'm using this class within another thread, the problem is that, when the external command hangs up and python tries to call process.terminate, the process variable is set to None. I looked for the situation where Popen returns None, but I couldn't find any information. – Roberto Mar 19 '12 at 08:30
  • Not working on Windows7. The script stopped but not the command. e.g. "ping www.redicecn.com -t" – redice Sep 06 '12 at 10:59
  • 2
    For anybody having the problem @redice was having, [this question](http://stackoverflow.com/questions/4789837/how-to-terminate-a-python-subprocess-launched-with-shell-true) may help. In short, if you use shell=True, the shell becomes the child process which gets killed, and its command (child of the child process) lives on. – Anson Mar 19 '13 at 00:11
  • 6
    This answer does not provide the same functionality of the original since it doesn't return stdout. – stephenbez Dec 17 '13 at 16:39
  • 1
    also, this doesn't work when a process stalls on terminate (e.g. ffmpeg in some situations). it'll stay forever in 'Terminating process'. for this specific case you have to issue .terminate() twice, but that's not a general solution. i'm still looking for one – ierdna Apr 10 '14 at 20:51
  • 2
    thread.is_alive can lead to a race condition. See http://www.ostricher.com/2015/01/python-subprocess-with-timeout/ – ChaimKut May 07 '15 at 12:56
  • @ChaimKut: you could avoid the race condition by calling [`p.kill()` unconditionally (EAFP vs. LBYL)](http://stackoverflow.com/a/33465356/4279) – jfs Nov 01 '15 at 18:12
  • 1
    This basically works but it is missing the code to kill the process tree which is required in the case of shell=True (the shell becomes a child process). To answer Anson's questions I will post that solution further down in this thread. – Tomas Aug 23 '16 at 18:43
  • **Windows cygwin** may need more stopping power. Instead of self.process.terminate(), [this solution](http://stackoverflow.com/a/17614872/673991) worked: `subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=self.process.pid))` – Bob Stein Mar 31 '17 at 02:41
166

jcollado's answer can be simplified using the threading.Timer class:

import shlex
from subprocess import Popen, PIPE
from threading import Timer

def run(cmd, timeout_sec):
    proc = Popen(shlex.split(cmd), stdout=PIPE, stderr=PIPE)
    timer = Timer(timeout_sec, proc.kill)
    try:
        timer.start()
        stdout, stderr = proc.communicate()
    finally:
        timer.cancel()

# Examples: both take 1 second
run("sleep 1", 5)  # process ends normally at 1 second
run("sleep 5", 1)  # timeout happens at 1 second
Boris Verkhovskiy
  • 14,854
  • 11
  • 100
  • 103
sussudio
  • 81
  • 1
  • 3
  • 2
  • 15
    +1 for simple portable solution. You don't need `lambda`: `t = Timer(timeout, proc.kill)` – jfs Apr 05 '14 at 21:43
  • 4
    +1 This should be the accepted answer, because it doesn't require the way in which the process is launched to be changed. – Dave Branton May 28 '15 at 22:18
  • 1
    Why does it require the lambda? Couldn't the bound method p.kill be used without the lambda there? – Danny Staple Aug 05 '15 at 16:10
  • // , Would you be willing to include an example of the use of this? – Nathan Basanese Sep 02 '15 at 00:27
  • jcollado's answer works for me, but this one not, on Win7 – Swing May 27 '16 at 03:13
  • This works for me, but I still get the usual Windows behavior of only killing the spawned parent process and not its children. But that's separate. – sfink Jun 23 '16 at 18:00
  • It doesn't work when shell=True is used in Popen, could someone explain please ? – arunkumarreddy Jul 19 '18 at 05:50
  • After trying this exact code with much shorter timers, I found one issue. The "finally" statement that holds the "timer.cancel()" call, needs to be wrapped in its own try/except. On rare cases, I was able to create an Error with very short timers when the Process had been killed just before the execution of the code could run. | finally: try: timer.cancel() except OSError as e: if 'No such process' not in str(e): raise OSError(e) Hmmm. actually, I am still able to trigger it, and for some reason, haven't been able to catch it. researching further,... – PyTis Oct 03 '18 at 04:41
  • How to know if the process ended normally or is timed out ? – tuk Feb 25 '19 at 17:33
  • 2
    @tuk `timer.isAlive()` before `timer.cancel()` means that it ended normally – Charles Apr 28 '20 at 15:51
84

If you're on Unix,

import signal
  ...
class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    raise Alarm

signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(5*60)  # 5 minutes
try:
    stdoutdata, stderrdata = proc.communicate()
    signal.alarm(0)  # reset the alarm
except Alarm:
    print "Oops, taking too long!"
    # whatever else
jfs
  • 399,953
  • 195
  • 994
  • 1,670
Alex Martelli
  • 854,459
  • 170
  • 1,222
  • 1,395
  • 3
    Well, I am interested in a cross-platform solution that works at least on win/linux/mac. – Sridhar Ratnakumar Jul 28 '09 at 01:52
  • 1
    I like this unix-based approach. Ideally, one would combine this with a windows-specific approach (using CreateProcess and Jobs) .. but for now, the solution below is simple, easy and works-so-far. – Sridhar Ratnakumar Jul 29 '09 at 19:43
  • 3
    I have added a portable solution, see my answer – flybywire Oct 13 '09 at 08:16
  • 4
    This solution would work _only_if_ signal.signal(signal.SIGALRM, alarm_handler) is called from the main thread. See the documentation for signal – volatilevoid Dec 19 '09 at 05:58
  • Unfortunately, when running (on linux) in the context of an Apache module (like mod_python, mod_perl, or mod_php), I've found the use of signals and alarms to be disallowed (presumably because they interfere with Apache's own IPC logic). So to achieve the goal of timing out a command I have been forced to write "parent loops" which launch a child process and then sit in a "sleep"y loop watching the clock (and possibly also monitoring output from the child). – Peter Jul 29 '11 at 00:54
  • Don't you have to close `stdoutdata` and `stderrdata` in a `finally` clause, unless you have a `subprocess.Popen(..., close_fds=True)` – Ehtesh Choudhury Aug 24 '11 at 00:45
  • I'm curious why Alarm is under Exception rather than under EnvironmentError. It seems that the process stalling is conceptually similar to various OS errors. – Jim Dennis May 31 '13 at 19:54
  • Nice answer. However, be wary that this solution only shows the timeout, without stopping/killing the process. Good thing I had 8 cores available. – yasen Sep 19 '14 at 06:53
44

Here is Alex Martelli's solution as a module with proper process killing. The other approaches do not work because they do not use proc.communicate(). So if you have a process that produces lots of output, it will fill its output buffer and then block until you read something from it.

from os import kill
from signal import alarm, signal, SIGALRM, SIGKILL
from subprocess import PIPE, Popen

def run(args, cwd = None, shell = False, kill_tree = True, timeout = -1, env = None):
    '''
    Run a command with a timeout after which it will be forcibly
    killed.
    '''
    class Alarm(Exception):
        pass
    def alarm_handler(signum, frame):
        raise Alarm
    p = Popen(args, shell = shell, cwd = cwd, stdout = PIPE, stderr = PIPE, env = env)
    if timeout != -1:
        signal(SIGALRM, alarm_handler)
        alarm(timeout)
    try:
        stdout, stderr = p.communicate()
        if timeout != -1:
            alarm(0)
    except Alarm:
        pids = [p.pid]
        if kill_tree:
            pids.extend(get_process_children(p.pid))
        for pid in pids:
            # process might have died before getting to this line
            # so wrap to avoid OSError: no such process
            try: 
                kill(pid, SIGKILL)
            except OSError:
                pass
        return -9, '', ''
    return p.returncode, stdout, stderr

def get_process_children(pid):
    p = Popen('ps --no-headers -o pid --ppid %d' % pid, shell = True,
              stdout = PIPE, stderr = PIPE)
    stdout, stderr = p.communicate()
    return [int(p) for p in stdout.split()]

if __name__ == '__main__':
    print run('find /', shell = True, timeout = 3)
    print run('find', shell = True)
wim
  • 338,267
  • 99
  • 616
  • 750
Björn Lindqvist
  • 19,221
  • 20
  • 87
  • 122
  • 3
    This will not work on windows, plus the order of functions is reversed. – Hamish Grubijan Jan 23 '11 at 18:15
  • 3
    This sometimes results in exception when another handler registers itself on SIGALRM and kills the process before this one gets to "kill", added work-around. BTW, great recipe! I've used this to launch 50,000 buggy processes so far without freezing or crashing the handling wrapper. – Yaroslav Bulatov Jul 01 '11 at 21:02
  • How can this be modified to run in a Threaded application? I am trying to use it from within worker threads and get `ValueError: signal only works in main thread` – wim Aug 03 '11 at 07:18
  • @Yaroslav Bulatov Thanks for the info. What was the workaround you added to deal with the issue mentioned? – jpswain Aug 10 '11 at 15:38
  • 1
    Just added "try;catch" block, it's inside the code. BTW, in the long term, this turned out to give me problems because you can only set one SIGALARM handler, and other processes can reset it. One solution to this is given here -- http://stackoverflow.com/questions/6553423/multiple-subprocesses-with-timeouts – Yaroslav Bulatov Aug 11 '11 at 05:49
34

Since Python 3.5, there's a new subprocess.run universal function (meant to replace check_call, check_output, ...) which has the timeout= parameter as well.

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, shell=False, cwd=None, timeout=None, check=False, encoding=None, errors=None)

Run the command described by args. Wait for command to complete, then return a CompletedProcess instance.

It raises a subprocess.TimeoutExpired exception when the timeout expires.
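
For example, a minimal usage sketch (the command, timeout value, and error handling here are illustrative assumptions, not part of the original answer):

import subprocess

try:
    result = subprocess.run(
        ["ping", "-c", "3", "somehost"],   # example command (an assumption)
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,          # merge stderr into stdout
        timeout=5,                         # seconds
        check=True)                        # raise CalledProcessError on non-zero exit
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("command timed out")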

Boris Verkhovskiy
  • 14,854
  • 11
  • 100
  • 103
Jean-François Fabre
  • 137,073
  • 23
  • 153
  • 219
24

timeout is now supported by call() and communicate() in the subprocess module (as of Python 3.3):

import subprocess

subprocess.call("command", timeout=20, shell=True)

This will call the command and raise the exception

subprocess.TimeoutExpired

if the command doesn't finish after 20 seconds.

You can then handle the exception to continue your code, something like:

try:
    subprocess.call("command", timeout=20, shell=True)
except subprocess.TimeoutExpired:
    # insert code here

Hope this helps.

unutbu
  • 842,883
  • 184
  • 1,785
  • 1,677
James
  • 1
  • 1
  • 2
  • [there is an existing answer that mentions the `timeout` parameter](http://stackoverflow.com/a/12698328/4279). Though mentioning it once more wouldn't hurt. – jfs Feb 23 '15 at 02:43
  • // , I think OP's looking for a solution for the older Python. – Nathan Basanese Sep 02 '15 at 00:29
20

Surprised nobody mentioned using timeout:

timeout 5 ping -c 3 somehost

This won't work for every use case obviously, but if you're dealing with a simple script, it's hard to beat.

Also available as gtimeout in coreutils via Homebrew for macOS users.
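
To drive it from Python (as the comments below also suggest), here is a hedged sketch; the helper name is made up, and GNU timeout's documented exit status 124 signals that the command timed out:

import subprocess

def run_with_coreutils_timeout(cmd_args, seconds):
    # Prepend the coreutils timeout command; on macOS with Homebrew
    # coreutils the binary is typically called gtimeout instead.
    proc = subprocess.Popen(["timeout", str(seconds)] + cmd_args,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    if proc.returncode == 124:   # GNU timeout's "command timed out" exit status
        raise RuntimeError("command timed out")
    return proc.returncode, out

# e.g. run_with_coreutils_timeout(["ping", "-c", "3", "somehost"], 5)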

Karsten
  • 501
  • 6
  • 12
  • 1
    you mean: `proc = subprocess.Popen(['/usr/bin/timeout', str(timeout)] + cmd, ...)`. Is there `timeout` command on Windows as OP asks? – jfs Apr 21 '15 at 09:18
  • In windows, one can use application like **git bash** which allows bash utilities in Windows. – Kaushik Acharya Jul 06 '19 at 06:04
  • @KaushikAcharya even if you use git bash, when python calls subprocess it will run on Windows, hence this bypass won't work. – Naman Chikara Jul 17 '19 at 17:40
18

I've modified sussudio's answer. Now the function returns (returncode, stdout, stderr, timeout); stdout and stderr are decoded to UTF-8 strings:

import shlex
import subprocess
from threading import Timer

def kill_proc(proc, timeout):
  timeout["value"] = True
  proc.kill()

def run(cmd, timeout_sec):
  proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  timeout = {"value": False}
  timer = Timer(timeout_sec, kill_proc, [proc, timeout])
  timer.start()
  stdout, stderr = proc.communicate()
  timer.cancel()
  return proc.returncode, stdout.decode("utf-8"), stderr.decode("utf-8"), timeout["value"]
Chilledrat
  • 2,593
  • 3
  • 28
  • 38
Michal Zmuda
  • 5,381
  • 3
  • 43
  • 39
10

Another option is to write to a temporary file to prevent the stdout blocking instead of needing to poll with communicate(). This worked for me where the other answers did not; for example, on Windows.

    outFile =  tempfile.SpooledTemporaryFile() 
    errFile =   tempfile.SpooledTemporaryFile() 
    proc = subprocess.Popen(args, stderr=errFile, stdout=outFile, universal_newlines=False)
    wait_remaining_sec = timeout

    while proc.poll() is None and wait_remaining_sec > 0:
        time.sleep(1)
        wait_remaining_sec -= 1

    if wait_remaining_sec <= 0:
        killProc(proc.pid)
        raise ProcessIncompleteError(proc, timeout)

    # read temp streams from start
    outFile.seek(0);
    errFile.seek(0);
    out = outFile.read()
    err = errFile.read()
    outFile.close()
    errFile.close()
Matt
  • 983
  • 2
  • 9
  • 14
10

Prepending the Linux command timeout isn't a bad workaround and it worked for me.

cmd = "timeout 20 "+ cmd
subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = p.communicate()
Vikram Hosakote
  • 3,528
  • 12
  • 23
  • How can I get the output strings printed out during the subprocess execution? The output messages are returned by the subprocess. – Ammad May 15 '20 at 23:19
  • `timeout` is not available by default in mac so this is not portable – dux2 Jul 30 '20 at 09:02
6

Here is my solution, using Thread and Event:

import subprocess
from threading import Thread, Event

def kill_on_timeout(done, timeout, proc):
    if not done.wait(timeout):
        proc.kill()

def exec_command(command, timeout):

    done = Event()
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    watcher = Thread(target=kill_on_timeout, args=(done, timeout, proc))
    watcher.daemon = True
    watcher.start()

    data, stderr = proc.communicate()
    done.set()

    return data, stderr, proc.returncode

In action:

In [2]: exec_command(['sleep', '10'], 5)
Out[2]: ('', '', -9)

In [3]: exec_command(['sleep', '10'], 11)
Out[3]: ('', '', 0)
rsk
  • 1,266
  • 1
  • 13
  • 20
6

I added the solution with threading from jcollado to my Python module easyprocess.

Install:

pip install easyprocess

Example:

from easyprocess import Proc

# shell is not supported!
stdout=Proc('ping localhost').call(timeout=1.5).stdout
print stdout
Sridhar Ratnakumar
  • 81,433
  • 63
  • 146
  • 187
ponty
  • 614
  • 8
  • 8
  • The easyprocess module (http://code.activestate.com/pypm/easyprocess/) worked for me, even using it from multiprocessing. – iChux Feb 28 '14 at 09:55
5

If you are using Python 2, give it a try:

import subprocess32

try:
    output = subprocess32.check_output(command, shell=True, timeout=3)
except subprocess32.TimeoutExpired as e:
    print e
ThOong Ku
  • 29
  • 1
  • 5
5

The solution I use is to prefix the shell command with timelimit. If the command takes too long, timelimit will stop it and Popen will have a returncode set by timelimit. If it is > 128, it means timelimit killed the process.

See also python subprocess with timeout and large output (>64K)
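
For illustration, a minimal sketch of this approach (the timelimit flags and the helper name are assumptions; check your system's timelimit man page for the exact warn/kill options):

import shlex
import subprocess

def run_with_timelimit(cmd, seconds):
    # Prefix the command with timelimit; -t/-T are assumed to set the warn
    # and kill times -- consult `man timelimit` on your system.
    full_cmd = "timelimit -t {} -T {} {}".format(seconds, seconds + 5, cmd)
    proc = subprocess.Popen(shlex.split(full_cmd),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode > 128:
        # As described above, a return code above 128 means timelimit
        # killed the process with a signal.
        raise RuntimeError("command timed out: {}".format(cmd))
    return proc.returncode, out, err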

Community
  • 1
  • 1
bortzmeyer
  • 34,164
  • 12
  • 67
  • 91
3

I've implemented what I could gather from a few of these. This works in Windows, and since this is a community wiki, I figure I would share my code as well:

import subprocess
import threading
import time

class Command(threading.Thread):
    def __init__(self, cmd, outFile, errFile, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.process = None
        self.outFile = outFile
        self.errFile = errFile
        self.timed_out = False
        self.timeout = timeout

    def run(self):
        self.process = subprocess.Popen(self.cmd, stdout = self.outFile, \
            stderr = self.errFile)

        while (self.process.poll() is None and self.timeout > 0):
            time.sleep(1)
            self.timeout -= 1

        if not self.timeout > 0:
            self.process.terminate()
            self.timed_out = True
        else:
            self.timed_out = False

Then from another class or file:

        outFile =  tempfile.SpooledTemporaryFile()
        errFile =   tempfile.SpooledTemporaryFile()

        executor = command.Command(c, outFile, errFile, timeout)
        executor.daemon = True
        executor.start()

        executor.join()
        if executor.timed_out:
            out = 'timed out'
        else:
            outFile.seek(0)
            errFile.seek(0)
            out = outFile.read()
            err = errFile.read()

        outFile.close()
        errFile.close()
joslinm
  • 7,845
  • 6
  • 49
  • 72
  • Actually, this probably does not work. The `terminate()` function marks a thread as terminated, but does not actually terminate the thread! I can verify this in *nix, but I don't have a Windows computer to test on. – dotancohen Oct 06 '13 at 09:50
2

Once you understand the full process-running machinery in *nix, you will easily find a simpler solution:

Consider this simple example of how to make a timeoutable communicate() method using select.select() (available almost everywhere on *nix nowadays). This can also be written with epoll/poll/kqueue, but the select.select() variant is a good example for you. And the major limitations of select.select() (speed and 1024 max fds) are not applicable to your task.

This works under *nix, does not create threads, does not use signals, can be launched from any thread (not only the main one), and is fast enough to read 250 MB/s of data from stdout on my machine (i5 2.3 GHz).

There is a problem in joining stdout/stderr at the end of communicate(). If you have huge program output, this could lead to high memory usage. But you can call communicate() several times with smaller timeouts.

import errno
import os
import select
import subprocess
import time

class Popen(subprocess.Popen):
    def communicate(self, input=None, timeout=None):
        if timeout is None:
            return subprocess.Popen.communicate(self, input)

        if self.stdin:
            # Flush stdio buffer, this might block if user
            # has been writing to .stdin in an uncontrolled
            # fashion.
            self.stdin.flush()
            if not input:
                self.stdin.close()

        read_set, write_set = [], []
        stdout = stderr = None

        if self.stdin and input:
            write_set.append(self.stdin)
        if self.stdout:
            read_set.append(self.stdout)
            stdout = []
        if self.stderr:
            read_set.append(self.stderr)
            stderr = []

        input_offset = 0
        deadline = time.time() + timeout

        while read_set or write_set:
            try:
                rlist, wlist, xlist = select.select(read_set, write_set, [], max(0, deadline - time.time()))
            except select.error as ex:
                if ex.args[0] == errno.EINTR:
                    continue
                raise

            if not (rlist or wlist):
                # Just break if timeout
                # Since we do not close stdout/stderr/stdin, we can call
                # communicate() several times reading data by smaller pieces.
                break

            if self.stdin in wlist:
                chunk = input[input_offset:input_offset + subprocess._PIPE_BUF]
                try:
                    bytes_written = os.write(self.stdin.fileno(), chunk)
                except OSError as ex:
                    if ex.errno == errno.EPIPE:
                        self.stdin.close()
                        write_set.remove(self.stdin)
                    else:
                        raise
                else:
                    input_offset += bytes_written
                    if input_offset >= len(input):
                        self.stdin.close()
                        write_set.remove(self.stdin)

            # Read stdout / stderr by 1024 bytes
            for fn, tgt in (
                (self.stdout, stdout),
                (self.stderr, stderr),
            ):
                if fn in rlist:
                    data = os.read(fn.fileno(), 1024)
                    if data == '':
                        fn.close()
                        read_set.remove(fn)
                    tgt.append(data)

        if stdout is not None:
            stdout = ''.join(stdout)
        if stderr is not None:
            stderr = ''.join(stderr)

        return (stdout, stderr)
Vadim Fint
  • 875
  • 8
  • 9
2

You can do this using select:

import subprocess
from datetime import datetime
from select import select

def call_with_timeout(cmd, timeout):
    started = datetime.now()
    sp = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        p = select([sp.stdout], [], [], timeout)
        if p[0]:
            p[0][0].read()
        ret = sp.poll()
        if ret is not None:
            return ret
        if (datetime.now()-started).total_seconds() > timeout:
            sp.kill()
            return None
Matt
  • 3,483
  • 4
  • 36
  • 46
dspeyer
  • 2,904
  • 1
  • 18
  • 24
2

Python 2.7:

import time
import subprocess

def run_command(cmd, timeout=0):
    start_time = time.time()
    df = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while timeout and df.poll() is None:
        if time.time()-start_time >= timeout:
            df.kill()
            return -1, ""
    output = '\n'.join(df.communicate()).strip()
    return df.returncode, output
jowt
  • 1
  • 2
2

Example of capturing output after a timeout, tested in Python 3.7.8:

try:
    return subprocess.run(command, shell=True, capture_output=True, timeout=20, cwd=cwd, universal_newlines=True)
except subprocess.TimeoutExpired as e:
    print(e.output.decode(encoding="utf-8", errors="ignore"))
    assert False

The exception subprocess.TimeoutExpired has the output and other members:

cmd - Command that was used to spawn the child process.

timeout - Timeout in seconds.

output - Output of the child process if it was captured by run() or check_output(). Otherwise, None.

stdout - Alias for output, for symmetry with stderr.

stderr - Stderr output of the child process if it was captured by run(). Otherwise, None.

More info: https://docs.python.org/3/library/subprocess.html#subprocess.TimeoutExpired

Neaţu Ovidiu Gabriel
  • 833
  • 3
  • 10
  • 20
2

Late answer, and for Linux only, but in case someone wants to use subprocess.getstatusoutput(), where the timeout argument isn't available, you can use the built-in Linux timeout command at the beginning of the command, i.e.:

import subprocess

timeout = 25 # seconds
cmd = f"timeout --preserve-status --foreground {timeout} ping duckgo.com"
exit_c, out = subprocess.getstatusoutput(cmd)

if (exit_c == 0):
    print("success")
else:
    print("Error: ", out)


Pedro Lobito
  • 94,083
  • 31
  • 258
  • 268
1

I've used killableprocess successfully on Windows, Linux and Mac. If you are using Cygwin Python, you'll need OSAF's version of killableprocess because otherwise native Windows processes won't get killed.

Heikki Toivonen
  • 30,964
  • 11
  • 42
  • 44
1

This solution kills the process tree in case of shell=True, passes parameters to the process (or not), has a timeout and gets back the stdout, stderr and return code of the call (it uses psutil for kill_proc_tree). This was based on several solutions posted on SO, including jcollado's. Posting in response to comments by Anson and jradice in jcollado's answer. Tested on Windows Server 2012 and Ubuntu 14.04. Please note that for Ubuntu you need to change the parent.children(...) call to parent.get_children(...).

import os
import subprocess
import psutil
from threading import Thread

def kill_proc_tree(pid, including_parent=True):
  parent = psutil.Process(pid)
  children = parent.children(recursive=True)
  for child in children:
    child.kill()
  psutil.wait_procs(children, timeout=5)
  if including_parent:
    parent.kill()
    parent.wait(5)

def run_with_timeout(cmd, current_dir, cmd_parms, timeout):
  def target():
    process = subprocess.Popen(cmd, cwd=current_dir, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)

    # wait for the process to terminate
    if (cmd_parms == ""):
      out, err = process.communicate()
    else:
      out, err = process.communicate(cmd_parms)
    errcode = process.returncode

  thread = Thread(target=target)
  thread.start()

  thread.join(timeout)
  if thread.is_alive():
    me = os.getpid()
    kill_proc_tree(me, including_parent=False)
    thread.join()
Tomas
  • 944
  • 8
  • 10
1

There's an idea to subclass the Popen class and extend it with some simple method decorators. Let's call it ExpirablePopen.

from logging import error
from subprocess import Popen
from threading import Event
from threading import Thread


class ExpirablePopen(Popen):

    def __init__(self, *args, **kwargs):
        self.timeout = kwargs.pop('timeout', 0)
        self.timer = None
        self.done = Event()

        Popen.__init__(self, *args, **kwargs)

    def __tkill(self):
        timeout = self.timeout
        if not self.done.wait(timeout):
            error('Terminating process {} by timeout of {} secs.'.format(self.pid, timeout))
            self.kill()

    def expirable(func):
        def wrapper(self, *args, **kwargs):
            # zero timeout means call of parent method
            if self.timeout == 0:
                return func(self, *args, **kwargs)

            # if timer is None, need to start it
            if self.timer is None:
                self.timer = thr = Thread(target=self.__tkill)
                thr.daemon = True
                thr.start()

            result = func(self, *args, **kwargs)
            self.done.set()

            return result
        return wrapper

    wait = expirable(Popen.wait)
    communicate = expirable(Popen.communicate)


if __name__ == '__main__':
    from subprocess import PIPE

    print ExpirablePopen('ssh -T git@bitbucket.org', stdout=PIPE, timeout=1).communicate()
1

I had the problem that I wanted to terminate a multithreading subprocess if it took longer than a given timeout length. I wanted to set a timeout in Popen(), but it did not work. Then, I realized that Popen().wait() is equal to call() and so I had the idea to set a timeout within the .wait(timeout=xxx) method, which finally worked. Thus, I solved it this way:

import os
import sys
import signal
import subprocess
from multiprocessing import Pool

cores_for_parallelization = 4
timeout_time = 15  # seconds

def main():
    jobs = [...YOUR_JOB_LIST...]
    with Pool(cores_for_parallelization) as p:
        p.map(run_parallel_jobs, jobs)

def run_parallel_jobs(args):
    # Define the arguments including the paths
    initial_terminal_command = 'C:\\Python34\\python.exe'  # Python executable
    function_to_start = 'C:\\temp\\xyz.py'  # The multithreading script
    final_list = [initial_terminal_command, function_to_start]
    final_list.extend(args)

    # Start the subprocess and determine the process PID
    subp = subprocess.Popen(final_list)  # starts the process
    pid = subp.pid

    # Wait until the return code returns from the function by considering the timeout. 
    # If not, terminate the process.
    try:
        returncode = subp.wait(timeout=timeout_time)  # should be zero if accomplished
    except subprocess.TimeoutExpired:
        # Distinguish between Linux and Windows and terminate the process if 
        # the timeout has been expired
        if sys.platform.startswith('linux'):  # 'linux2' on Python 2, 'linux' on Python 3
            os.kill(pid, signal.SIGTERM)
        elif sys.platform == 'win32':
            subp.terminate()

if __name__ == '__main__':
    main()
1

Although I haven't looked at it extensively, this decorator I found at ActiveState seems to be quite useful for this sort of thing. Along with subprocess.Popen(..., close_fds=True), at least I'm ready for shell-scripting in Python.

Ehtesh Choudhury
  • 7,452
  • 5
  • 42
  • 48
0

Unfortunately, I'm bound by very strict policies on the disclosure of source code by my employer, so I can't provide actual code. But for my taste the best solution is to create a subclass overriding Popen.wait() to poll instead of waiting indefinitely, and Popen.__init__ to accept a timeout parameter. Once you do that, all the other Popen methods (which call wait) will work as expected, including communicate.
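
A minimal sketch of that idea (not the poster's actual code; written for Python 3, with the class name, polling interval, and kill-on-timeout behavior being assumptions):

import subprocess
import time

class TimeoutPopen(subprocess.Popen):
    """Popen subclass whose wait() polls and kills the child after a timeout."""

    def __init__(self, *args, timeout=None, **kwargs):
        self._timeout = timeout            # seconds, or None for no limit
        super().__init__(*args, **kwargs)

    def wait(self, timeout=None):
        limit = timeout if timeout is not None else self._timeout
        deadline = None if limit is None else time.time() + limit
        # Poll instead of blocking forever.
        while self.poll() is None:
            if deadline is not None and time.time() > deadline:
                self.kill()                # give up and kill the child
                break
            time.sleep(0.1)
        return super().wait()              # reap the child and return its exit code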

Luke Woodward
  • 63,336
  • 16
  • 89
  • 104
0

https://pypi.python.org/pypi/python-subprocess2 provides extensions to the subprocess module which allow you to wait up to a certain period of time, otherwise terminate.

So, to wait up to 10 seconds for the process to terminate, otherwise kill:

pipe  = subprocess.Popen('...')

timeout =  10

results = pipe.waitOrTerminate(timeout)

This is compatible with both Windows and Unix. "results" is a dictionary; it contains "returnCode", which is the return code of the app (or None if it had to be killed), as well as "actionTaken", which will be "SUBPROCESS2_PROCESS_COMPLETED" if the process completed normally, or a mask of "SUBPROCESS2_PROCESS_TERMINATED" and "SUBPROCESS2_PROCESS_KILLED" depending on the action taken (see the documentation for full details).
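
A hedged usage sketch building on the snippet above (the field names come from the description here; check the package documentation before relying on them):

results = pipe.waitOrTerminate(timeout)

if results["returnCode"] is None:
    # The process did not exit on its own and had to be terminated or killed.
    print("process stopped, action taken:", results["actionTaken"])
else:
    print("process exited with return code", results["returnCode"])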

0

For Python 2.6+, use gevent:

 from gevent.subprocess import Popen, PIPE, STDOUT

 def call_sys(cmd, timeout):
      p = Popen(cmd, shell=True, stdout=PIPE)
      output, _ = p.communicate(timeout=timeout)
      assert p.returncode == 0, p.returncode
      return output

 call_sys('./t.sh', 2)

 # t.sh example
 sleep 5
 echo done
 exit 1
whi
  • 2,685
  • 6
  • 33
  • 40
0

Sometimes you need to process something (e.g. ffmpeg) without using communicate(), and in this case you need an asynchronous timeout. A practical way to do this is using ttldict:

pip install ttldict

import time
import subprocess
from threading import Thread, Event
from ttldict import TTLOrderedDict

sp_timeout = TTLOrderedDict(default_ttl=10)

def kill_on_timeout(done, proc):
    # Poll until the TTL entry expires (get() then returns None) and kill the process.
    while True:
        if sp_timeout.get('exp_time') is None:
            proc.kill()
            break
        time.sleep(0.1)  # avoid a tight busy-wait
    
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True, stderr=subprocess.STDOUT)
            
sp_timeout['exp_time'] = time.time()
            
done = Event()
watcher = Thread(target=kill_on_timeout, args=(done, process))
watcher.daemon = True
watcher.start()
done.set()

for line in process.stdout:
.......
SweetNGX
  • 153
  • 1
  • 9
0

Subprocess Popen.communicate now has a timeout option:

If the process does not terminate after timeout seconds, a TimeoutExpired exception will be raised. Catching this exception and retrying communication will not lose any output. The child process is not killed if the timeout expires, so in order to cleanup properly a well-behaved application should kill the child process and finish communication:

proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()

You can take a look at the docs.

-3

Was just trying to write something simpler.

#!/usr/bin/python

from subprocess import Popen
import time

popen = Popen(["/bin/sleep", "10"])
pid = popen.pid
sttime = time.time()
waittime = 3

print "Start time %s" % (sttime)

while True:
    popen.poll()
    time.sleep(1)
    rcode = popen.returncode
    now = time.time()
    if rcode is None and now > (sttime + waittime):
        print "Killing it now"
        popen.kill()
        break
    if rcode is not None:
        break
Jabir Ahmed
  • 82
  • 1
  • 1
  • 5
  • time.sleep(1) is a very bad idea - imagine you want to run many commands that would take about 0.002 sec. You should rather wait while poll() (see select; for Linux, epoll recommended :) – ddzialak May 09 '14 at 21:03