
This is a follow-up to this question, which I closed yesterday in haste.

I have two Python processes piped together with the 2nd one reading from stdin. When the feeder process (which writes its output to stdout) stops (e.g. is killed), I expected the code below to generate an exception, as was suggested by others:

    import errno
    import sys

    while True:
        try:
            l = sys.stdin.readline()
            ## process l

        except IOError, e:
            ## handle IO exceptions (this must come before the more general
            ## Exception handler, otherwise it is never reached)
            if e.errno == errno.EPIPE:
                ## handle EPIPE exceptions
                pass

        except Exception, e:
            ## handle all other exceptions
            pass

However, that does not happen. Instead, sys.stdin.readline() simply returns an empty l.

So 2 questions:

  • Is it possible to modify this code to get an exception when the feeder process dies?
  • Can I somehow find the process ID of the feeder process inside the 2nd process? In other words, if my pipe is ./step1.py | ./step2.py, I want to find the process ID of step1 inside step2. I tried os.getppid() but that returns the ID of the bash process that runs step2, not step1.

Thanks!


1 Answer


First of all, why do you still stick to that `while True:` construction? It is dangerous, as it may result in an endless loop. Doesn't `for line in sys.stdin:` work for you?
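
For reference, a minimal consumer along those lines could look like this (the "processing" here is just a placeholder that echoes each line back out):

    import sys

    for line in sys.stdin:
        ## process line -- here we simply echo it
        sys.stdout.write(line)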

Then, if you do not get this exception, it is not a bad thing. It just means that the 'feeder' properly closed its end of the pipe and the consumer noticed it. That is why readline() returns an empty string: once the writer is gone, every further call returns '' immediately, so inside your while True: loop you end up spinning on empty lines instead of getting an exception.
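
If you want to stick with readline(), a minimal sketch of detecting that end-of-file condition explicitly could look like this:

    import sys

    while True:
        l = sys.stdin.readline()
        if l == '':
            ## EOF: the feeder closed its end of the pipe (or was killed)
            break
        ## process l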

I don't think there is a neat way to determine the process ID of another process that is just connected via a pipe, other than explicitly communicating it. The operating system, of course, keeps this information (see the /proc-based sketch below): https://serverfault.com/questions/48330/how-can-i-get-more-info-on-open-pipes-show-in-proc-in-linux
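
As a rough, Linux-only illustration of what the linked answer describes, the sketch below looks up the inode of the pipe on stdin and scans /proc/*/fd for another process that holds the same pipe open. The function name find_pipe_peer is made up for this example, and reading other processes' fd directories may require sufficient permissions:

    import os
    import stat

    def find_pipe_peer():
        ## Linux-only sketch: return the PID of another process that has our
        ## stdin pipe open, or None if none can be found.
        st = os.fstat(0)
        if not stat.S_ISFIFO(st.st_mode):
            return None                       ## stdin is not a pipe at all
        target = 'pipe:[%d]' % st.st_ino      ## /proc fd symlinks look like this
        for pid in os.listdir('/proc'):
            if not pid.isdigit() or int(pid) == os.getpid():
                continue
            fd_dir = '/proc/%s/fd' % pid
            try:
                for fd in os.listdir(fd_dir):
                    if os.readlink(os.path.join(fd_dir, fd)) == target:
                        return int(pid)
            except OSError:
                ## process disappeared or permission denied; skip it
                continue
        return None

In the ./step1.py | ./step2.py example, step2 could call this once at startup and then, for instance, use os.kill(peer_pid, 0) (which raises OSError once the process is gone) to check whether the feeder is still alive.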

I've often piped data into Python scripts that read it via `for line in sys.stdin`. This way, the loop simply ends when the feeding process terminates and closes its side of the pipe properly.

  • OK, I replaced the "while" with the "for" and in some simple tests it worked, which is great. Killing the feeder process still did not produce any exceptions; the 2nd script just finished. This should work, I think, although I am trying to think of a case where the feeder process would have some sort of momentary hiccup that would cause the 2nd script to end -- I need to avoid this type of behavior if possible. I probably need to figure out a way to get the pipe process info to check whether the feeder is still alive. – I Z Sep 11 '12 at 14:54
  • Kill your feeder process hard with SIGKILL, i.e. `kill -9 $PROCESSID` in order to observe how your consumer behaves in this case. This is kind of the worst scenario that can happen. – Dr. Jan-Philip Gehrcke Sep 11 '12 at 15:07
  • Yeah, that's exactly what I've been doing, plus dumping files that include empty lines into the consumer via `cat`. – I Z Sep 11 '12 at 15:27