I am afraid that writing into a named FIFO with O_NONBLOCK will lead to an out-of-memory problem on Linux. I made a little experiment to see what happens when the writer process...
- sets the signal handler for a broken pipe (SIGPIPE) to ignore,
- opens a named pipe with O_NONBLOCK,
- and then writes a lot of stuff into that pipe.
This essentially integrates the idea of the ftee program posted here into the writer program itself.
On the other side, I read the contents of the named FIFO into a file after quite a while - by which I mean "after the writer process has produced way more than 64K of data" - and examine it.
The result is that the file contains all the output of the writer process, which shows that far more than 64K has been buffered by Linux.
This raises some questions for me:
- Is there another limit beyond the 64K named-pipe buffer size? (A sketch for querying the capacity follows this list.)
- Or will the buffered data grow until Linux runs out of memory?
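To attack the first question directly, the kernel's idea of the pipe capacity can be queried with fcntl. A minimal sketch, assuming Linux's F_GETPIPE_SZ command (value 1032 from fcntl.h - I hard-code it because Perl's Fcntl module may not export it):
#!/usr/bin/perl -w
use strict;
use Fcntl;
my $F_GETPIPE_SZ = 1032;   # assumption: Linux-specific fcntl command, value from fcntl.h
sysopen(my $fh, '/tmp/fifo1', O_NONBLOCK | O_RDWR) or die $!;
my $capacity = fcntl($fh, $F_GETPIPE_SZ, 0) or die "fcntl: $!";
print "pipe capacity: $capacity bytes\n";   # 65536 on a default kernel
If this prints 65536 while far more than 64K survives in the FIFO, the extra data must be held somewhere other than the pipe buffer itself.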
Background: this is my writer program writer.pl:
#!/usr/bin/perl -w
use strict;
use Fcntl;

my $fifo_name = '/tmp/fifo1';

# detach from the terminal: fork and let the parent exit
sub daemon
{
    my $pid = fork();
    if (!defined $pid) { die "fork(): $!\n"; }   # fork() returns undef on failure
    if ($pid > 0)      { exit(0); }              # parent exits, child carries on
    close(STDIN);
    close(STDOUT);
}

sub main
{
    `mkfifo $fifo_name`;
    $SIG{'PIPE'} = 'IGNORE';   # ignore SIGPIPE instead of dying on a broken pipe

    # O_RDWR keeps a read end open on our own descriptor, so the open
    # succeeds even without a reader; O_NONBLOCK makes writes non-blocking
    my $fifo_fh = undef;
    sysopen($fifo_fh, $fifo_name, O_NONBLOCK | O_RDWR) or die $!;

    my $n = 0;
    while (1)
    {
        my $line = "This is line $n...\n";
        syswrite($fifo_fh, $line, length($line));   # return value not checked
        select(undef, undef, undef, 0.01);          # sleep 1/100 second
        $n++;
    }
}

daemon();
main();
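One thing this program does not check is whether every syswrite actually succeeds: with O_NONBLOCK, a write into a full pipe should fail with EAGAIN instead of blocking. A sketch of a loop variant that counts rejected writes ($failed is my own name, not in the original):
use Errno;   # makes the %! error-flag hash available

my $failed = 0;
while (1)
{
    my $line = "This is line $n...\n";
    my $written = syswrite($fifo_fh, $line, length($line));
    if (!defined $written && $!{EAGAIN}) {
        $failed++;   # pipe full: the non-blocking write was rejected
    }
    print STDERR "rejected writes so far: $failed\n" if $n % 1000 == 0;
    select(undef, undef, undef, 0.01);   # sleep 1/100 second
    $n++;
}
For the test below I kept the original, unchecked loop.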
... and this is what I tested:
$ ./writer.pl
... wait a while ...
$ cat /tmp/fifo1 > dump.txt
... since cat will never see EOF here (the writer keeps the FIFO open via O_RDWR), hit Ctrl-C after a while, then
$ less dump.txt
This is line 0...
This is line 1...
This is line 2...
...
This is line 61014...
This is line 61015...
All output of writer.pl since the beginning was stored somewhere!
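For completeness, the number of bytes sitting in the FIFO at any given moment can be inspected from a second process with the FIONREAD ioctl. A sketch, assuming Linux's FIONREAD value 0x541B from asm-generic/ioctls.h (hard-coded rather than pulling in sys/ioctl.ph):
#!/usr/bin/perl -w
use strict;
use Fcntl;
my $FIONREAD = 0x541B;   # assumption: Linux value from asm-generic/ioctls.h
sysopen(my $fh, '/tmp/fifo1', O_NONBLOCK | O_RDONLY) or die $!;
my $buf = pack('i', 0);
ioctl($fh, $FIONREAD, $buf) or die "ioctl: $!";
printf "%d bytes currently buffered in the FIFO\n", unpack('i', $buf);
Watching this while writer.pl runs would show directly whether the in-kernel count ever exceeds 64K.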