
I'm writing a little Linux fifo test program.

I created a FIFO with `mkfifo mpipe`. The program should perform one write per argument passed to it. If no arguments are passed, it performs one read from the pipe.

Here is my code:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc > 1)
    {
        /* Writer: one write per argument, including its NUL terminator */
        int fd = open("./mpipe", O_WRONLY);
        if (fd < 0)
        {
            printf("ERROR: open\n");
            return 1;
        }

        for (int i = 1; i < argc; i++)
        {
            int bytes = write(fd, argv[i], strlen(argv[i]) + 1);
            if (bytes <= 0)
            {
                printf("ERROR: write\n");
                close(fd);
                return 1;
            }
            printf("Wrote %d bytes: %s\n", bytes, argv[i]);
        }

        close(fd);
        return 0;
    }

    /* Else perform one read */
    char buf[64];
    int bytes = 0;

    int fd = open("./mpipe", O_RDONLY);
    if (fd < 0)
    {
        printf("ERROR: open\n");
        return 1;
    }

    bytes = read(fd, buf, sizeof buf);
    if (bytes <= 0)
    {
        printf("ERROR: read\n");
        close(fd);
        return 1;
    }

    printf("Read %d bytes: %s\n", bytes, buf);

    close(fd);
    return 0;
}

I would expect the behavior to be something like this...

I call `./pt hello i am the deepest guy`, and I would expect it to block for 6 reads, one per argument. Instead one read seems to be enough to trigger multiple writes. My output looks like this:

# Term 1 - writer
$: ./pt hello i am the deepest guy # this call blocks until a read, but then cascades.
Wrote 6 bytes: hello
Wrote 2 bytes: i
Wrote 3 bytes: am
Wrote 4 bytes: the # output ends here

# Term 2 - reader
$: ./pt
Read 6 bytes: hello

Could someone help explain this weird behavior? I thought that every read would have to be matched with a write when communicating over a pipe.

DeepDeadpool
  • A single read can read data from multiple writes, as you've demonstrated. Atomicity of writes means that writes from separate processes won't be interleaved; either all of P1's message will precede all of P2's message or vice versa, subject to size limits. – Jonathan Leffler Jun 06 '18 at 22:00
  • Put your `read()` in a loop. BTW: you are sending "hello" plus a NUL character (total size = 6) – wildplasser Jun 06 '18 at 22:22 (a sketch of such a read loop follows these comments)
  • The writer process is not blocking until a read, it is blocking until the open. That is, `open` in the writer blocks until the reader calls `open`. Then the writer is filling up a buffer. Add some diagnostics around the open calls. – William Pursell Jun 06 '18 at 23:41
  • Whichever **open** is first blocks until the other (by default), but once both ends are open the data is buffered and the writer can get ahead of the reader by an amount that can depend on the system; [for (modern) Linux it defaults to 64kiB](https://stackoverflow.com/questions/4624071/pipe-buffer-size-is-4k-or-64k). – dave_thompson_085 Jun 07 '18 at 01:00
  • That explains why sometimes my output is non-deterministic; it's just a race condition. So if I modified my data to prepend the size of the incoming message, then the programs could look out for that (sketched after these comments). – DeepDeadpool Jun 07 '18 at 15:46
  • @jonathanleffler, atomicity of writes is a consequence of the inode being locked during the `write(2)` call. This guarantees that two individual writes don't interleave with each other. – Luis Colorado Jun 08 '18 at 07:54
  • @wildplasser, sending the NUL is a good way to delimit the actual strings... though it means you cannot simply `printf(3)` what you have received, as the output will stop at the first NUL... but anyway, as my answer shows, you don't receive more than what was written. – Luis Colorado Jun 08 '18 at 07:56
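
Following wildplasser's suggestion above, here is a minimal sketch of a looping reader (same `./mpipe` path and 64-byte buffer as the question; an illustration, not the original poster's code):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    int fd = open("./mpipe", O_RDONLY);
    if (fd < 0)
    {
        printf("ERROR: open\n");
        return 1;
    }

    /* Keep reading until the writer closes its end (read returns 0)
       or an error occurs (read returns -1). */
    for (;;)
    {
        int bytes = read(fd, buf, sizeof buf);
        if (bytes < 0)
        {
            printf("ERROR: read\n");
            close(fd);
            return 1;
        }
        if (bytes == 0)   /* EOF: no writers left */
            break;
        printf("Read %d bytes\n", bytes);
    }

    close(fd);
    return 0;
}

Note that a single read may return several NUL-separated strings at once, which is why this sketch prints only the byte count rather than passing `buf` to printf.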
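
And a sketch of the length-prefix idea from the comments: prepend a fixed-size header carrying the payload length, so the reader knows exactly how many bytes belong to each message. The helper names (`send_msg`, `recv_msg`) are hypothetical, not from the question:

#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Writer side: send a 4-byte length header, then the payload. */
static int send_msg(int fd, const char *msg)
{
    uint32_t len = (uint32_t)strlen(msg);
    if (write(fd, &len, sizeof len) != (ssize_t)sizeof len)
        return -1;
    if (write(fd, msg, len) != (ssize_t)len)
        return -1;
    return 0;
}

/* Reader side: read the header, then loop until the whole payload
   has arrived. A fully robust version would loop on the header read
   too, since read(2) may return fewer bytes than requested. */
static int recv_msg(int fd, char *buf, size_t bufsz)
{
    uint32_t len;
    if (read(fd, &len, sizeof len) != (ssize_t)sizeof len || len >= bufsz)
        return -1;

    size_t got = 0;
    while (got < len)
    {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    buf[len] = '\0';   /* NUL-terminate so the caller can print it */
    return (int)len;
}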

1 Answer


What is happening there is that the kernel blocks the writer process in the open(2) system call until you have a reader opening the FIFO for reading (a FIFO requires both ends connected to processes to work, so whichever process arrives first blocks in open(2) until the other shows up). Once the reader does its first read(2) call, the kernel passes the pending data from the writer to the reader and awakens both processes. That is the reason you receive only the first command-line parameter, and not the first 16 bytes from the writer: you get only the six characters {'h', 'e', 'l', 'l', 'o', '\0'} that the blocked writer had written so far.
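
To watch that happen, here is a tiny diagnostic sketch along the lines of William Pursell's comment: print a message on either side of the open(2) call (a hypothetical probe, not part of the program above). The second message only appears once the other end of the FIFO has been opened too.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    printf("writer: about to open ./mpipe ...\n");
    int fd = open("./mpipe", O_WRONLY);   /* blocks here until a reader opens */
    printf("writer: open returned %d\n", fd);

    if (fd >= 0)
        close(fd);
    return 0;
}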

Finally, as the reader just closes the FIFO, the writer gets killed by a SIGPIPE signal, since no more readers have the FIFO open. If you install a signal handler in the writer process (or ignore the signal), you'll instead get an error from the blocked write(2) syscall, with errno set to EPIPE, telling you that no more readers are attached to the FIFO.
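
A minimal sketch of that, assuming the same FIFO as above: ignore SIGPIPE in the writer, and write(2) then reports the missing reader as an EPIPE error instead of the process being killed.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* With SIGPIPE ignored, writing to a FIFO that has lost all its
       readers fails with errno == EPIPE instead of killing us. */
    signal(SIGPIPE, SIG_IGN);

    int fd = open("./mpipe", O_WRONLY);
    if (fd < 0)
        return 1;

    for (;;)
    {
        if (write(fd, "hello", 6) < 0)
        {
            if (errno == EPIPE)
                printf("ERROR: write: no readers left (EPIPE)\n");
            break;
        }
    }

    close(fd);
    return 0;
}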

Just notice that this is a feature and not a bug: it is a means of knowing that your writes will not reach any reader until you close and reopen the FIFO.

The kernel locks the inode of the FIFO for the duration of each read(2) or write(2) call, so even another process doing another write(2) on the FIFO will be blocked meanwhile, and the reader will not get data from that second writer (should you have one) mixed in. You can try it if you like: start two writers and see what happens.

$ pru I am the best &
[1] 271
$ pru
Read 2 bytes: I
Wrote 2 bytes: I
Wrote 3 bytes: am
[1]+  Broken pipe             pru I am the best  <<< this is the writer process being killed by SIGPIPE, as announced by the shell.
$ _
Luis Colorado