Can the "standard" subprocess pipeline technique (e.g. http://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline) be "upgraded" to two pipelines?
# How about this:
from subprocess import Popen, PIPE

p1 = Popen(["cmd1"], stdout=PIPE, stderr=PIPE)
p2 = Popen(["cmd2"], stdin=p1.stdout)
p3 = Popen(["cmd3"], stdin=p1.stderr)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
p1.stderr.close()  # Likewise if p3 exits.
#p2.communicate()  # or p3.communicate()?
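For concreteness, here's a runnable version of the sketch above. The Python one-liners are my stand-ins for cmd1/cmd2/cmd3 (a producer that writes to both streams, and two uppercasing filters); nothing is assumed about the real commands:

```python
import sys
from subprocess import Popen, PIPE

# Stand-in cmd1: writes one line to stdout and one to stderr.
p1 = Popen([sys.executable, "-c",
            "import sys; sys.stdout.write('to-stdout\\n'); sys.stderr.write('to-stderr\\n')"],
           stdout=PIPE, stderr=PIPE)
# Stand-in cmd2/cmd3: uppercase whatever arrives on stdin.
p2 = Popen([sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"],
           stdin=p1.stdout, stdout=PIPE)
p3 = Popen([sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"],
           stdin=p1.stderr, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
p1.stderr.close()  # Likewise if p3 exits.
out2 = p2.communicate()[0]  # drains p2's stdout to EOF
out3 = p3.communicate()[0]  # then drains p3's
```

With this tiny amount of data the two sequential communicate() calls happen to work, since everything fits in the kernel pipe buffers; whether that generalizes is exactly the question.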
OK, it's actually a different use case, but the closest starting point seems to be the pipeline example. By the way, how does p2.communicate() in a "normal" pipeline drive p1? Here's the normal pipeline for reference:
# From the Python docs: the shell pipeline
#   output=`dmesg | grep hda`
# becomes:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
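A portable variant of that example, for experimenting (the producer one-liner is my substitution for dmesg; grep is assumed available). As far as I can tell, p2.communicate() never touches p1 at all: it only reads p2.stdout to EOF and waits for p2, while p1 runs concurrently and writes straight into the kernel pipe:

```python
import sys
from subprocess import Popen, PIPE

# Stand-in producer emits three device names; grep keeps the "hda" lines.
p1 = Popen([sys.executable, "-c", "print('sda1'); print('hda1'); print('hda2')"],
           stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]  # b'hda1\nhda2\n'
```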
I guess I'm ultimately interested in what kinds of "process graphs" (or maybe just trees?) communicate() can support, but we'll leave the general case for another day.
Update: Here's the baseline functionality. Without communicate(), create two threads reading from p1.stdout and p2.stdout, and inject input from the main process via p1.stdin.write(). The question is whether we can drive a 1-source, 2-sink graph using just communicate().
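For reference, a sketch of that threaded baseline (my assumptions: stand-in Python one-liners for the real commands, and the two reader threads drain the sink ends of the fan-out from the top of the question):

```python
import sys
import threading
from subprocess import Popen, PIPE

# Stand-in source: echoes stdin to stdout and, uppercased, to stderr.
src = ("import sys\n"
       "for line in sys.stdin:\n"
       "    sys.stdout.write(line)\n"
       "    sys.stderr.write(line.upper())\n")
p1 = Popen([sys.executable, "-c", src], stdin=PIPE, stdout=PIPE, stderr=PIPE)
# Stand-in sinks: pass their input through.
p2 = Popen([sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
           stdin=p1.stdout, stdout=PIPE)
p3 = Popen([sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
           stdin=p1.stderr, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
p1.stderr.close()  # Likewise if p3 exits.

results = {}
def drain(name, pipe):
    results[name] = pipe.read()  # blocks until the writer closes its end

t2 = threading.Thread(target=drain, args=("p2", p2.stdout))
t3 = threading.Thread(target=drain, args=("p3", p3.stdout))
t2.start(); t3.start()

p1.stdin.write(b"hello\n")  # inject input from the main process
p1.stdin.close()            # EOF propagates and lets the whole graph drain
t2.join(); t3.join()
for p in (p1, p2, p3):
    p.wait()
```

The reader threads are what communicate() would have to replace: something has to drain both sink pipes concurrently, or a full pipe buffer stalls the graph.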