
I want to run an .exe program in the background. Let's say the program is httpd.exe.
I can run it, but when I try to read its output, the read gets stuck, because there is no output if it starts successfully. If there is an error, however, it works fine.

Here is the code I'm using:

import asyncio
import os

async def run(cmd):
    proc = await asyncio.create_subprocess_exec(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)

    stdout, stderr = await proc.communicate()

    return (proc, stdout, stderr)

os.chdir('c:\\apache\\bin')
process, stdout, stderr = asyncio.run(run('httpd.exe'))
print(stdout, stderr)
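One way to see the hang described above, and to bound it, is to wrap `communicate()` in `asyncio.wait_for`. This is only an illustrative sketch, not the asker's code: `run_with_timeout` and the grace period are invented names/values, and a short-lived `python -c` command stands in for `httpd.exe`:

```python
import asyncio
import sys

async def run_with_timeout(cmd_args, timeout=5.0):
    # Start the process with both output streams piped.
    proc = await asyncio.create_subprocess_exec(
        *cmd_args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    try:
        # communicate() only returns once the process exits and its pipes
        # close -- exactly why it hangs on a long-running server.
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout)
        return proc.returncode, stdout, stderr
    except asyncio.TimeoutError:
        # No exit within the grace period: the server is (probably) running
        # fine. Kill it here only so the sketch cleans up after itself.
        proc.kill()
        await proc.wait()
        return None, b'', b''

# A short-lived command stands in for httpd.exe in this sketch:
rc, out, err = asyncio.run(run_with_timeout(
    [sys.executable, '-c', 'print("started")']))
```

For a real server process a `None` return code here means "still running after the timeout", which is the success case the asker cannot otherwise distinguish.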
Rez

1 Answer


I tried to make the following code as general as possible:

  1. I make no assumptions as to whether the program being run writes its output to stdout alone or to stderr alone. So I capture both outputs by starting two threads, one for each stream, which write the output to a common queue that can be read in real time. When end-of-stream is encountered on each of stdout and stderr, the corresponding thread writes a special None record to the queue to indicate end of stream. So after seeing two such "end of stream" indicators, the reader of the queue knows that no more lines will be written to the queue and that the process has effectively ended.
  2. The call to subprocess.Popen can be made with argument shell=True so that it can also run built-in shell commands, and to make specifying the command easier (it can then be a single string rather than a list of strings).
  3. The function run_cmd returns the created process and the queue. You now just have to loop reading lines from the queue until two None records are seen. Once that occurs, you can simply wait for the process to complete, which should be immediate.
  4. If you know that the process you are starting writes its output only to stdout or only to stderr (or if you only want to capture one of these outputs), then you can modify the program to start only one thread, specify the subprocess.PIPE value for only that output, and have the loop that reads lines from the queue look for only one None end-of-stream indicator.
  5. The threads are daemon threads, so if you wish to terminate based on output from the process before all the end-of-stream records have been detected, the threads will automatically be terminated along with the main process.
  6. run_apache, which runs Apache as a subprocess, is itself a daemon thread. If it detects any output from Apache, it sets an event that has been passed to it. The main thread that starts run_apache can periodically test this event, wait on this event, wait for the run_apache thread to end (which will only occur when Apache ends), or terminate Apache via the global variable proc.
import subprocess
import sys
import threading
import queue


def read_stream(f, q):
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None) # signal end of stream from stdout or stderr

def run_cmd(command, run_in_shell=True):
    """
    Run command as a subprocess. If run_in_shell is True, then
    command is a string, else it is a list of strings.
    """
    proc = subprocess.Popen(command, shell=run_in_shell, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    q = queue.Queue()
    threading.Thread(target=read_stream, args=(proc.stdout, q), daemon=True).start()
    threading.Thread(target=read_stream, args=(proc.stderr, q), daemon=True).start()
    return proc, q

import os

def run_apache(event):
    global proc

    os.chdir('c:\\apache\\bin')
    proc, q = run_cmd(['httpd.exe'], False)
    seen_None_count = 0
    while seen_None_count < 2:
        line = q.get()
        if line is None:
            # end of stream from either stdout or stderr
            seen_None_count += 1
        else:
            event.set() # Seen output line:
            print(line, end='')
    # wait for process to terminate, which should be immediate:
    proc.wait()

# This event will be set if Apache writes any output:
event = threading.Event()
t = threading.Thread(target=run_apache, args=(event,), daemon=True)
t.start()
# Main thread runs and can test event any time to see if it has done any output:
if event.is_set():
    ...
# The main thread can wait for run_apache thread to normally terminate,
# will occur when Apache terminates:
t.join()
# or the main thread can kill Apache via the global variable proc:
proc.terminate() # No need to do t.join() since run_apache is a daemon thread
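For reference, the pattern above can be exercised end to end with a short-lived command in place of httpd.exe. The `python -c` script below is an illustrative stand-in that writes one line to each stream, so both None end-of-stream records appear and the reader loop exits:

```python
import queue
import subprocess
import sys
import threading

def read_stream(f, q):
    # Forward each line to the shared queue; None marks end of stream.
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None)

def run_cmd(command, run_in_shell=True):
    proc = subprocess.Popen(command, shell=run_in_shell,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            text=True)
    q = queue.Queue()
    threading.Thread(target=read_stream, args=(proc.stdout, q), daemon=True).start()
    threading.Thread(target=read_stream, args=(proc.stderr, q), daemon=True).start()
    return proc, q

# A stand-in for httpd.exe that writes to both streams and exits:
cmd = [sys.executable, '-c',
       'import sys; print("to stdout"); print("to stderr", file=sys.stderr)']
proc, q = run_cmd(cmd, run_in_shell=False)

lines = []
seen_None_count = 0
while seen_None_count < 2:
    line = q.get()
    if line is None:
        seen_None_count += 1
    else:
        lines.append(line)
proc.wait()  # immediate: both streams have already closed
```

Because the queue interleaves both streams in arrival order, the two lines may come out in either order; only the count of None records matters for termination.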
Booboo
  • There is a problem: the program has no output if it runs successfully, so the `proc.wait()` makes the whole app freeze – Rez Apr 18 '22 at 02:09
  • you shouldn't use "shell=True" as an argument; see this answer: https://stackoverflow.com/a/3172488/13285707. Instead use the ".split()" method to separate the command – XxJames07- Apr 18 '22 at 09:18
  • @XxJames07- As I said, I was trying to make the code as general as possible. So I specified *shell=True* in the case that the user wants to run a command that is built-in to the "shell", such as the *dir* command. – Booboo Apr 18 '22 at 09:42
  • @XxJames07- I have modified the code so specifying *shell=True* is an option according to the command being executed. – Booboo Apr 18 '22 at 10:18
  • @RezaT1994 So let's suppose you don't issue the `proc.wait()` call. Then Apache is running fine and the threads that are running and reading from stdout and stderr will continue to run and never see the end of file on stdout or stderr and therefore never write the end-of-stream indicators to the queue. How would you propose then for the main program to finally decide that there is no problem? You can't detect that there is no problem this way, only if there is one!!! I have updated the answer and code. See new point 6. – Booboo Apr 18 '22 at 10:19
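The "you can't detect that there is no problem, only that there is one" logic from the last comment can be sketched with `Event.wait` and a grace period. This is an illustrative sketch only: the `watch` thread below is a stand-in for `run_apache` (it deliberately never sets the event, simulating a healthy server that prints nothing), and the 0.5-second timeout is an arbitrary value:

```python
import threading
import time

def watch(event):
    # Stand-in for run_apache: a healthy server writes nothing,
    # so the event is never set.
    time.sleep(0.2)

event = threading.Event()
threading.Thread(target=watch, args=(event,), daemon=True).start()

# Give the process a grace period; if no output appears within it,
# assume startup succeeded ("no news is good news").
started_ok = not event.wait(timeout=0.5)
```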