
I need to periodically check the stdout of a running process. For example, the process is tail -f /tmp/file, which is spawned in the Python script. Every x seconds, the stdout of that subprocess is written to a string and further processed. The subprocess is eventually stopped by the script.

To parse the stdout of a subprocess, I have used check_output until now, which doesn't seem to work here, as the process is still running and doesn't produce a definite output.

>>> from subprocess import check_output
>>> out = check_output(["tail", "-f", "/tmp/file"])
 #(waiting for tail to finish)

It should be possible to use threads for the subprocesses, so that the output of multiple subprocesses may be processed (e.g. tail -f /tmp/file1, tail -f /tmp/file2).

How can I start a subprocess, periodically check and process its stdout, and eventually stop the subprocess in a multithreading-friendly way? The Python script runs on a Linux system.

The goal is not to continuously read a file; the tail command is just an example, as it behaves exactly like the actual command used.

Edit: I didn't think this through; the file did not exist. check_output now simply waits for the process to finish.

Edit 2: An alternative method with Popen and PIPE appears to result in the same issue: it waits for tail to finish.

>>> from subprocess import Popen, PIPE, STDOUT
>>> cmd = 'tail -f /tmp/file'
>>> p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
>>> output = p.stdout.read()
 #(waiting for tail to finish)
boolean.is.null
  • http://stackoverflow.com/a/6482200/1866177 probably solves this problem. – Dschoni Mar 02 '17 at 11:02
  • Your example has a bigger problem than being unable to read stdout. Please fix it. – Mad Physicist Mar 02 '17 at 11:13
  • @Dschoni, OP is trying to ingest, not redirect the output. This makes it a little more complicated than the link you provided. – Mad Physicist Mar 02 '17 at 11:15
  • I didn't flag it as duplicate. Just wanted to hint the OP to stdout redirection. – Dschoni Mar 02 '17 at 11:18
  • You should use the full form of `Popen` since the process will run continuously, latch on to the stdout pipe and just use a `for` loop on the file object (since it's a blocking iterator). As you mentioned, the reading loop may need to run on a separate thread if your main program needs to do something besides respond to changes in the file. – Mad Physicist Mar 02 '17 at 11:20
  • I will post a full answer when you fix your example with an actual attempt. – Mad Physicist Mar 02 '17 at 11:21
  • @MadPhysicist, I fixed my example and added a second approach. I believe I need a simple non-blocking method to read the stdout of the process. – boolean.is.null Mar 02 '17 at 13:13

1 Answer


Your second attempt is 90% correct. The only issue is that you are attempting to read all of tail's stdout at once, after it has finished. However, tail is intended to keep running in the background indefinitely, so you really want to read its stdout line by line:

from subprocess import Popen, PIPE, STDOUT
p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
for line in p.stdout:
    print(line)

I have removed the shell=True and close_fds=True arguments. The first is unnecessary and potentially dangerous, while the second is just the default.

Remember that file objects are iterable over their lines in Python. The for loop will run until tail dies, but it will process each line as it appears, as opposed to read, which will block until tail dies.

If I create an empty file at /tmp/file, start this program and begin echoing lines into the file from another shell, the program will echo those lines. You should probably replace print with something a bit more useful (a small sketch of one option follows the example output below).

Here is an example of commands I typed after starting the code above:

Command line

$ echo a > /tmp/file
$ echo b > /tmp/file
$ echo c >> /tmp/file

Program Output (From Python in a different shell)

b'a\n'
b'tail: /tmp/file: file truncated\n'
b'b\n'
b'c\n'
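
As suggested above, print can be replaced with something more useful. Here is a minimal sketch of one option, assuming the lines should simply be collected into a list for later processing and that a hypothetical sentinel line "stop" ends the loop (both assumptions are mine, not part of the original answer):

from subprocess import Popen, PIPE, STDOUT

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)

collected = []                            # lines gathered for later processing
for line in p.stdout:
    text = line.decode().rstrip("\n")     # p.stdout yields bytes, so decode first
    collected.append(text)
    if text == "stop":                    # hypothetical sentinel that ends the loop
        break

p.terminate()                             # stop tail once it is no longer needed
p.wait()
print(collected)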

If you want your main program to stay responsive while you respond to the output of tail, start the loop in a separate thread. You should make this thread a daemon so that it does not prevent your program from exiting even if tail is not finished. You can have the thread open the subprocess, or you can just pass in the standard output to it. I prefer the latter approach since it gives you more control in the main thread:

from subprocess import Popen, PIPE, STDOUT
from threading import Thread

def deal_with_stdout():
    # Runs in the background thread, reading lines as tail produces them.
    for line in p.stdout:
        print(line)

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
t = Thread(target=deal_with_stdout, daemon=True)
t.start()
t.join()

The code here is nearly identical, with the addition of a new thread. I added a join() at the end so the program would behave well as an example (join waits for the thread to die before returning). You probably want to replace that with whatever processing code you would normally be running.
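
For instance, here is a rough sketch of how the main thread might do its own periodic work and eventually stop tail. The shared list, the two-second interval, and the fixed number of iterations are placeholders standing in for the "every x seconds" processing from the question, not part of the original answer; a queue.Queue would be more robust than a plain list in real code:

from subprocess import Popen, PIPE, STDOUT
from threading import Thread
import time

lines = []                                # shared buffer filled by the reader thread

def deal_with_stdout():
    for line in p.stdout:
        lines.append(line)                # store instead of print

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
t = Thread(target=deal_with_stdout, daemon=True)
t.start()

for _ in range(5):                        # stand-in for "run for a while"
    time.sleep(2)                         # the "every x seconds" from the question
    print(b"".join(lines))                # process everything received so far

p.terminate()                             # eventually stop the subprocess
p.wait()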

If your thread is complex enough, you may also want to inherit from Thread and override the run method instead of passing in a simple target.
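
A minimal sketch of that variant, with an illustrative class name of my own choosing, might look like this:

from subprocess import Popen, PIPE, STDOUT
from threading import Thread

class TailReader(Thread):
    """Reads lines from a subprocess's stdout in the background."""

    def __init__(self, stdout):
        super().__init__(daemon=True)
        self.stdout = stdout

    def run(self):
        # Same loop as deal_with_stdout, but as a method on the thread.
        for line in self.stdout:
            print(line)

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
t = TailReader(p.stdout)
t.start()
t.join()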

Mad Physicist
  • Thank you for that well-elaborated explanation, this helps me a lot! – boolean.is.null Mar 02 '17 at 14:03
  • I'm glad it did. Multithreading and multiprocessing are fairly easy tools once you know how to use them, but pretty hard to jump into. The worst part for me was the terminology. – Mad Physicist Mar 02 '17 at 14:10