
I am trying to run a command, get its output, and then later run another command in the same environment (for example, if I set an environment variable in the first command, I want it to be available to the second command). I tried this:

import subprocess

process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE);

process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr  = process.communicate()
print "stdout: " + str(stdout)

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr = process.communicate()
print "stdout: " + str(stdout)

but communicate() reads until the end of the output and then closes the pipes, so this is not a valid technique. I get this:

stdout: Test

Traceback (most recent call last):
  File "./MultipleCommands.py", line 15, in <module>
    process.stdin.write("echo $MyVar\n")
ValueError: I/O operation on closed file

I have seen this: https://stackoverflow.com/a/15654218/284529 , but it doesn't give a working example of how to do what it proposes. Can anyone demonstrate how to do this? I have also seen other techniques that involve constantly checking for output in a loop, but this doesn't fit the "get the output of a command" mentality - it is just treating it like a stream.

David Doria
  • For the specific example you're talking about, it sounds like you'd be better off using the env kwarg that Popen takes: `Popen("echo $MyVar", shell=True, env={"MyVar": "Test"})`. – dano Apr 17 '14 at 19:54
  • @dano You're right. In a real case I might want to start an ssh session, and keep it open and keep issuing commands to it to avoid the overhead of making a new connection for each command. – David Doria Apr 17 '14 at 20:37
  • In that case, I'd recommend using paramiko: https://github.com/paramiko/paramiko – dano Apr 17 '14 at 20:42
  • Could you answer a question for my personal survey: what place in the `subprocess` documentation made you think that `process.communicate()` may be called more than once for the same process? For interactive usage, `pexpect` is more suitable than `subprocess` (see the sketch after these comments). If you want to run commands over ssh, consider using `fabric` as a library (it is more high-level than `paramiko`). – jfs Apr 17 '14 at 23:02
  • Nothing made me explicitly think that it was ok, it just seemed like the only thing I could find in subprocess that "got the output of a command", versus just read indefinitely from a pipe. I'll look into 'fabric', thanks. – David Doria Apr 18 '14 at 11:27
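
For the interactive use case jfs mentions, a minimal sketch using pexpect's replwrap helper might look like the following (this assumes the third-party pexpect package with its replwrap module is installed; run_command blocks until the shell prompt comes back, so each call returns exactly that command's output):

from pexpect import replwrap  # third-party; replwrap ships with recent pexpect versions

shell = replwrap.bash()                  # one persistent bash process behind the scenes
shell.run_command('export MyVar="Test"')
print shell.run_command('echo $MyVar')   # Test
print shell.run_command('echo $MyVar')   # Test again -- the environment persists between calls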

4 Answers


To get the output of multiple commands, just combine them into a single script:

#!/usr/bin/env python
import subprocess
import sys

output = subprocess.check_output("""
export MyVar="Test"
echo $MyVar
echo ${MyVar/est/ick}
""", shell=True, executable='/bin/bash', universal_newlines=True)
sys.stdout.write(output)

Output

Test
Tick
jfs
  • @J.F.Sebastian But then you don't know which part of the output corresponds to each command. – David Doria Apr 21 '14 at 11:56
  • @DavidDoria: But using p.stdin/p.stdout directly doesn't provide this info either (how do you know how many lines to read for a given command?) – jfs Apr 21 '14 at 12:47
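
One way to keep the outputs separate (my own sketch, not part of the answer above) is to have the script print a delimiter line after every command and split the combined output on it:

import subprocess

DELIM = "---COMMAND-DONE---"  # marker chosen for this sketch; must match the echo lines below
                              # and must never occur in the real output

combined = subprocess.check_output("""
export MyVar="Test"
echo $MyVar
echo ---COMMAND-DONE---
echo ${MyVar/est/ick}
echo ---COMMAND-DONE---
""", shell=True, executable='/bin/bash', universal_newlines=True)

# One list entry per command that was followed by the delimiter.
per_command = [chunk.strip() for chunk in combined.split(DELIM) if chunk.strip()]
print per_command  # ['Test', 'Tick']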

The communicate and wait methods of Popen objects close the pipes after the process returns. If you want to stay in communication with the process, try something like this:

import subprocess

process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

process.stdout.readline()  # consume the first echo's output ("Test")

process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr = process.communicate()
print "stdout: " + str(stdout)

I think you misunderstand communicate...

Take a look at this link: http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate

communicate sends a string to the other process and then waits for it to finish (as you said, it reads stdout and stderr until EOF).

What you should do instead is:

proc.stdin.write('message')

# ...figure out how long or why you need to wait...

proc.stdin.write('message2')

(and if you need to get the stdout or stderr you'd use proc.stdout or proc.stderr)

Sheesh Mohsin

communicate returns when it sees that the subprocess has ended, but since you have an intermediate process (bash) that stays alive, you have to signal the end of each command's output manually.

As for the rest, the simplest approach is to just emit a marker line. However, I'm sorry to disappoint you, but polling (i.e. constantly checking in a loop) is actually the only sane option. If you don't like the loop, you could "hide" it away in a function.

import subprocess

def readlines_upto(stream, until="### DONE ###"):
    while True:
        line = stream.readline()
        if not line:  # EOF: the shell exited before printing the marker
            break
        if line.rstrip() == until:
            break
        yield line

process = subprocess.Popen("/bin/bash", shell=True,
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.write("echo '### DONE ###'\n")
process.stdin.flush()

# Note, I don't read stderr here, so if the subprocess writes too much there,
# it'll fill the pipe and get stuck. If you don't need the stderr data, don't
# redirect it to a pipe at all. If you need it, make readlines_upto read from
# both pipes (or merge them, as in the variation after this snippet).
stdout = "".join(line for line in readlines_upto(process.stdout))
print "stdout: " + stdout

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
drdaeman

As per the manual:

Popen.communicate(input=None)

    Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. [...]

You need to read from the pipe instead:

import os
stdout = os.read(process.stdout.fileno(), 1024)
print "stdout: " + stdout

If there's no data waiting to be read, this will block until data becomes available. You should use the select system call to prevent that:

import select
import os

try:
    i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
    stdout = os.read(i[0].fileno(), 1024)
except IndexError:
    # nothing was written to the pipe in 5 seconds
    stdout = ""

print "stdout: " + stdout

If you want to fetch multiple writes, to avoid race conditions, you'll have to put it in a loop:

stdout = ""
while True:
    try:
        i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
        stdout += os.read(i[0].fileno(), 1024)
    except IndexError:
        # nothing was written to the pipe in 5 seconds, we're done here
        break
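
To apply this to the original two-command example, the select-and-read logic can be wrapped in a small helper that drains whatever output has arrived after each batch of writes (a sketch along the lines of this answer; the function name read_available and the 1-second timeout are my own choices):

import os
import select
import subprocess

def read_available(pipe, timeout=1.0):
    # Keep reading until `timeout` seconds pass with no new data (or EOF).
    chunks = []
    while True:
        ready, _, _ = select.select([pipe], [], [], timeout)
        if not ready:     # nothing arrived within the timeout window
            break
        data = os.read(pipe.fileno(), 1024)
        if not data:      # EOF: the child closed its end of the pipe
            break
        chunks.append(data)
    return "".join(chunks)

process = subprocess.Popen("/bin/bash", shell=True,
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
print "stdout: " + read_available(process.stdout)

process.stdin.write("echo $MyVar\n")
process.stdin.flush()
print "stdout: " + read_available(process.stdout)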
netcoder
  • I tried this, but if there is more than one line per command it only seems to put the first line into stdout. For example: https://gist.github.com/daviddoria/fac54d3e8f2e01d21036 – David Doria Apr 17 '14 at 20:36
  • This is because there may be a delay between when data is written and when it is read from the pipe. For example, if I run your gist locally, sometimes I get one line, sometimes two. It's a race condition. To avoid it, you need to put it in a loop and `break` if the timeout is reached. Without knowing in advance the amount of data you're going to receive, the only thing you can do is rely on a timeout. I updated the answer. – netcoder Apr 17 '14 at 20:44