
I am trying to create a function that runs a command with a timeout while displaying its standard output (stdout) as it runs. I saw some answers, but not exactly what I am looking for.

The function signature is:

def run_with_timeout(command, timeout)

So far I am able to get stdout to print at runtime, but I am not sure what a robust way to time out the application is. Is there a robust way to do the timeout, or is there a better approach to this? I tried process.wait(timeout=timeout) and process.communicate(timeout=timeout), but they don't seem to work. I am trying to avoid using threads as well...

import subprocess

def run_with_timeout(command, timeout):
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,  # same as universal_newlines=True
    )

    output = ''

    try:
        # readline() blocks with no deadline, so nothing in this loop
        # ever raises subprocess.TimeoutExpired and the except below
        # is never reached
        for line in iter(process.stdout.readline, ''):
            output += line
            print(line, end='')

        process.stdout.close()
        process.out = output

    except subprocess.TimeoutExpired:
        process.kill()

    return process
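
For context, the standard communicate-based pattern (as documented for subprocess) looks like the sketch below; it enforces the timeout, but it buffers all output until the process exits, so nothing prints live. The function name is illustrative:

import subprocess

def run_buffered_with_timeout(command, timeout):
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    try:
        # communicate() collects all output and returns it only after
        # the process exits, so it cannot stream stdout as it runs
        output, _ = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        output, _ = process.communicate()  # drain remaining output, per the docs
    print(output, end='')
    return process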
YTKme
  • Is there a particular reason to avoid using threads here? This is a very solid use case for a simple thread that does nothing but delay and send a SIGTERM (see the sketch after these comments); I'm hard-pressed to think of a case against that approach. – Charles Duffy Jul 11 '23 at 20:28
  • @CharlesDuffy, mainly because we currently are using threads, but there seem to be a stability issue, so I guess the question would be, how to properly implement the thread in a robust way? – YTKme Jul 11 '23 at 20:47
  • I think your real underlying question here may be the issue you're having with the threading module – Matt Jul 11 '23 at 20:59
  • @Matt, I am running this function with `pytest` in a Jenkins pipeline for an application, say `ping` or `nslookup`, to see if it can run for some `timeout` time. Sometimes I get intermittent crashes, but I never saw that happen before we used threads, so I am wondering if I am just implementing threading incorrectly, and what the proper way to do it is – YTKme Jul 11 '23 at 21:07
  • Hard to say just from that; I'd open a separate question for it – Matt Jul 11 '23 at 21:09
  • Really depends on the details. Most of the times I've seen threading cause interpreter crashes involved C modules calling libraries that aren't thread-safe, or other corner cases that wouldn't apply here. – Charles Duffy Jul 11 '23 at 21:11
  • Another option is using `select.select((process.stdout.fileno(),), (), (), remaining_timeout)` before `process.stdout.readline()`. This works on Unix-like systems (not Microsoft Windows) only. – pts Jul 12 '23 at 00:29
  • Makes sense; unfortunately I need to get it working on all platforms – YTKme Jul 12 '23 at 16:01
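
A minimal sketch of the timer-thread approach from the first comment, assuming a command that exits on SIGTERM (the function name and structure are illustrative, not from the discussion):

import subprocess
import threading

def run_with_deadline(command, timeout):
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )

    # The timer thread does nothing but wait `timeout` seconds and then
    # call process.terminate() (SIGTERM on POSIX) unless cancelled first.
    timer = threading.Timer(timeout, process.terminate)
    timer.start()
    try:
        # Stream stdout on the main thread; when the process dies (on its
        # own or via the timer), readline() returns '' and the loop ends.
        for line in iter(process.stdout.readline, ''):
            print(line, end='')
    finally:
        timer.cancel()  # no-op if the timer already fired
        process.stdout.close()
        process.wait()
    return process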

2 Answers


It's possible to do it on Unix-like systems (e.g. Linux and macOS, but not Microsoft Windows) without threads. You can use preexec_fn= with signal.alarm.

import signal
import subprocess

def run_with_timeout(command, timeout):
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,  # same as universal_newlines=True
        preexec_fn=lambda: signal.alarm(timeout),
    )

    output = ''

    # The alarm (not subprocess.TimeoutExpired) enforces the timeout:
    # when SIGALRM kills the child, readline() returns '' and the loop ends.
    for line in iter(process.stdout.readline, ''):
        output += line
        print(line.rstrip('\n'))

    process.stdout.close()
    process.wait()  # reap the child and populate process.returncode
    process.out = output

    return process

print(run_with_timeout(('/bin/sh', '-c', 'while sleep .6; do N=$((N+1)); echo $N; done'), 3))

It works this way:

  • preexec_fn= executes signal.alarm(timeout) in the child process just before running the command.
  • signal.alarm(timeout) asks the kernel to deliver a SIGALRM signal to the (child) process after the specified number of seconds.
  • This SIGALRM signal kills the child process by default.
  • When the child process is killed, the write end of the process.stdout pipe is closed, which causes an EOF on the read end, which makes process.stdout.readline() return the '' sentinel, which ends the for line in ... loop.

Limitations:

  • It doesn't work on Microsoft Windows, because there is no signal.alarm (neither the kernel mechanism nor the Python function).
  • If the child process creates child processes of its own, those processes won't receive the SIGALRM signal, and they may keep running indefinitely, not closing the write end of the process.stdout pipe (one possible mitigation is sketched below).
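
Not from the original answer, but a common POSIX mitigation for that last limitation is to run the command in its own session/process group and signal the whole group on timeout. A minimal sketch (illustrative function name; note it trades live streaming for communicate()'s buffering):

import os
import signal
import subprocess

def run_group_with_timeout(command, timeout):
    # start_new_session=True puts the child in a new process group,
    # so the child and all its descendants can be signalled together.
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        start_new_session=True,
    )
    try:
        output, _ = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Kill the child *and* its descendants via the process group id
        os.killpg(os.getpgid(process.pid), signal.SIGKILL)
        output, _ = process.communicate()
    process.out = output
    return process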
pts

So I ended up reworking the function using threading. I'm not sure if this is the best approach, but it works with both the timeout and output to the console.

If there's a better approach or method of doing this, please let me know :)

import subprocess
import threading

def run_with_timeout(command, timeout):
    # Create the process
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,  # same as universal_newlines=True
    )

    # The thread function to capture the output
    def display_output():
        for line in iter(process.stdout.readline, ''):
            print(line.strip())
        process.stdout.close()

    # Create and start the reader thread
    t = threading.Thread(target=display_output)
    t.start()

    try:
        # Wait for the process to finish or `timeout` to be reached
        process.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        # process.kill() sends SIGKILL on POSIX (use process.terminate()
        # for SIGTERM); killing the child closes the pipe, so the reader
        # thread sees EOF and exits
        process.kill()
        process.wait()  # reap the killed child so returncode is set

    # Wait for the reader thread to drain any remaining output
    t.join()

    return process
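
For example, a hypothetical invocation on a POSIX system (the command and arguments are illustrative):

proc = run_with_timeout(['ping', '-c', '10', 'example.com'], timeout=5)
print('exit code:', proc.returncode)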
YTKme