
There is a decent amount of material about these topics on Stack Overflow, none of which answers exactly my question (see below), but it helped me find possible solutions. However, I am wondering how robust these solutions are (i.e., is there some obvious flaw in them that I missed?) and whether there is a better / cleaner / more Pythonic way to address this. The Stack Overflow threads I used for this are the following:

[1] Python read named PIPE

[2] unix pipe multiple writers

[3] How do I properly write to FIFOs in Python?

My problem is the following: I have a Python executable that will be running permanently on a machine. It needs to read messages from other processes (for instance, an admin SSHing into the machine must be able to send it a message; potentially other batch executables could also send it messages). Each message is a line of text (terminated with a Unix '\n' character). I want to implement this using Unix named pipes: the executable will create a named pipe at startup and open it, listening for messages. Sending a message is then simply a matter of writing a line to that named pipe.
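To make the setup concrete, here is a minimal round trip on a throwaway FIFO (the path and message text below are made up for the demo); the writer side is just open / write a line / close:

```python
import os
import tempfile
import threading

# Throwaway FIFO for illustration; the real daemon would use a fixed path.
fifo = os.path.join(tempfile.mkdtemp(), 'pipe')
os.mkfifo(fifo)

received = []

def reader():
    # One pass of the reader: collects every '\n'-terminated message
    # until all writers have closed their end.
    with open(fifo) as fh:
        for line in fh:
            received.append(line.rstrip('\n'))

t = threading.Thread(target=reader)
t.start()

# A writer (e.g. the admin's shell) just opens the pipe, writes a line, closes.
with open(fifo, 'w') as pipe:
    pipe.write('hello-daemon\n')

t.join()
print(received)  # ['hello-daemon']
```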

Constraints: I do not expect a high volume of messages, but I do expect concurrent message writing to behave reasonably. I do not want a message to be dropped just because the admin was unlucky enough to send it at the same time as another person or program.

The general solution recommended for this (in particular, see [1] above) seems to be a double loop of some sort:

import os

fifo = '/path/to/namedpipe'
os.mkfifo(fifo)

def read_fifo(fifo):
    while True:
        with open(fifo) as fh:
            for line in fh:
                print("Read command: " + line)

This seems to work well but has at least three race conditions I can think of:

(a) The inner 'for' loop terminates once all writers have closed their file handles and all the data they wrote has been consumed. If, at this point, a new writer opens the pipe, writes a message, and closes it again before the pipe has been closed and reopened by the outer 'while' loop of the reader, then that message will not be seen by the next iteration of the outer loop. It is lost.

(b) After the inner 'for' loop terminates, if a writer opens the pipe for writing but pauses there, the reader then closes the pipe (on exiting the 'with' block), and the writer writes something before the reader has reopened it, that writer will get a "Broken pipe" error.

(c) It looks like, in some cases at least, the inner 'for' loop only starts yielding lines once all writers have closed their file handles (some sources, like [1], even suggest calling file.read(), which is guaranteed to behave this way). This means that if there is always at least one active writer, the reader never processes anything.
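Point (c) is easy to reproduce deterministically with read(). In the toy demo below (throwaway FIFO, made-up message text), the line is written immediately, but read() only returns once the writer closes its handle, roughly half a second later:

```python
import os
import tempfile
import threading
import time

fifo = os.path.join(tempfile.mkdtemp(), 'pipe')
os.mkfifo(fifo)

result = {}

def reader():
    with open(fifo) as fh:
        start = time.monotonic()
        # read() only returns once the last writer has closed the pipe.
        result['data'] = fh.read()
        result['delay'] = time.monotonic() - start

t = threading.Thread(target=reader)
t.start()

w = open(fifo, 'w')
w.write('early message\n')
w.flush()
time.sleep(0.5)   # keep the write end open: the reader stays stuck in read()
w.close()         # only now does fh.read() return
t.join()
print(result)
```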

(c) doesn't really bother me, but (a) and (b) are a problem.

What I tried:

The problem seems to be that the reader has to close and reopen the pipe repeatedly, which leads to errors or dropped messages. I tried to get rid of the double loop by opening the pipe only once in the reader and then looping continuously, reading from the pipe with the readline() method. The main problem is that once all writers close their handles, readline() does not wait for input: it immediately returns an empty string. So to avoid looping very fast and using tons of CPU, my only two options are (1) sleeping, or (2) opening the pipe for writing inside... the reader.

(1) looks like this:

import time

def read_fifo(fifo):
    with open(fifo) as fh:
        line = fh.readline()
        while True:
            time.sleep(1)
            # Skip the empty strings readline() returns when no writer has
            # the pipe open.
            if line:
                print("Read command: " + line)
            line = fh.readline()

(2) looks like this:

def read_fifo(fifo):
    with open(fifo) as fh:
        # Holding our own write handle means the pipe never reaches EOF, so
        # readline() below always blocks until a full line arrives.
        dummy_file_handle = open(fifo, 'w')
        try:
            while True:
                line = fh.readline()
                print("Read command: " + line)
        finally:
            dummy_file_handle.close()
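A small variant of (2) that avoids the separate dummy handle is to let the reader open the FIFO with os.O_RDWR, so the same process holds a write end from the start. (O_RDWR on a FIFO is technically unspecified by POSIX, but it works on Linux.) A sketch, using a throwaway FIFO and a made-up 'quit' message just to end the demo:

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), 'pipe')
os.mkfifo(fifo)

received = []

def read_fifo(path):
    # On Linux, O_RDWR on a FIFO opens without blocking, and because this
    # process now holds a write end itself, readline() blocks at EOF
    # instead of returning '' when every external writer has closed.
    fd = os.open(path, os.O_RDWR)
    with os.fdopen(fd) as fh:
        while True:
            line = fh.readline()
            if line == 'quit\n':  # hypothetical shutdown message for the demo
                break
            received.append(line.rstrip('\n'))

t = threading.Thread(target=read_fifo, args=(fifo,))
t.start()

# Two independent writers, each doing a plain open / write / close.
for msg in ('first\n', 'second\n'):
    with open(fifo, 'w') as pipe:
        pipe.write(msg)

with open(fifo, 'w') as pipe:
    pipe.write('quit\n')

t.join()
print(received)  # ['first', 'second']
```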

Both options above look a bit hacky, whereas what I am trying to do seems like pretty basic usage of named pipes. Do you think this will work properly? Am I missing something, or an easier and nicer way to do this?

Thanks in advance for your help!

Pierre
  • What are the chances that someone is going to send something else at the exact same time? – thesonyman101 May 25 '17 at 01:20
  • Also, if you are using time.sleep, you could have the reader send a response when a message has been received, and keep the writer in a while loop, resending until it receives that response. – thesonyman101 May 25 '17 at 01:22
  • @thesonyman101 Given that a write is not an instantaneous action, the chances are *very* good. – chepner May 25 '17 at 01:28
  • True especially if it is a big file. – thesonyman101 May 25 '17 at 01:30
  • maybe you could make a list and as commands come in if one is currently in progress it adds it to the "queue" – thesonyman101 May 25 '17 at 01:33
  • @chepner [2] above suggests write to a pipe is indeed atomic as long as the message is short enough (<512 bytes on most platforms) which can be assumed to be true in my case. – Pierre May 27 '17 at 10:40
  • @thesonyman101 The read_fifo loop is single-threaded, so if a message is being "processed" (i.e. printed out in the examples above) it will not do anything else. My problem is more about how to read these messages from the pipe in an efficient and safe way. – Pierre May 27 '17 at 10:43
  • Note that the problem with the first code sample (double loop) is not exactly about messages being written at the same time, but more about a second message being written when the reader exits the inner loop after reading a first message. – Pierre May 27 '17 at 10:47
  • @Pierre Instead of *assuming* that your writers can make atomic writes, they should be using locks to ensure no one else can write to the pipe until they are done. – chepner May 27 '17 at 13:13
  • @chepner I wasn't clear enough in my comment above. There is no need to *assume* anything. I know by contract that my messages are less than {PIPE_BUF} characters. IEEE Std 1003.1-2001 *guarantees* that such writes will be atomic. – Pierre May 28 '17 at 20:52
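The {PIPE_BUF} limit referenced in the last comment can be checked at runtime; POSIX requires it to be at least 512 bytes:

```python
import os

# PC_PIPE_BUF gives the maximum size of a write to this pipe that is
# guaranteed to be atomic (never interleaved with other writers).
r, w = os.pipe()
try:
    pipe_buf = os.fpathconf(w, 'PC_PIPE_BUF')
finally:
    os.close(r)
    os.close(w)

print(pipe_buf)  # 4096 on Linux
```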
