
My program sometimes freezes when writing to stderr, in this situation:

  1. The program starts (e.g. from a terminal).
  2. The program forks itself twice and uses execvp to start each child with different parameters (the binary is re-executed via /proc/self/exe).
  3. The originally started program quits.
  4. The two forked processes are still running.
  5. The terminal the first program was started from is closed.
  6. A few fprintf calls writing to stderr still work, but at some point my program locks up completely. The debugger tells me it is stuck in fprintf.

What is happening here? I already tried setting SIGPIPE to SIG_IGN to keep the program from crashing as soon as nobody is listening on the pipes anymore, but I am still stuck: the freeze behaves the same with SIG_IGN and without it.

Any help is appreciated.

Nidhoegger
    _Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Questions without a clear problem statement are not useful to other readers. See: How to create a Minimal, Complete, and Verifiable example._ – Sourav Ghosh May 23 '16 at 08:17
  • The question has already been answered. – Nidhoegger May 23 '16 at 08:29
  • Right, but still, it should be useful to future readers. Explanation is required, IMHO. – Sourav Ghosh May 23 '16 at 08:41

1 Answer


In a nutshell: The system sends your program signals to save you from a problem. You ignore those signals. Bad things happen.

When your parent program was run, it had stdin (fd 0), stdout (fd 1) and stderr (fd 2) connected to the TTY of the shell that started it (the terminal). These behave much like pipes. When you closed the terminal, those fds were left hanging, with no one on the other side to communicate with.

At first, nothing bad happens. You write to stderr, and the standard library caches those writes. No system calls are performed, so no problem.

But then the buffers fill up, and stdlib tries to flush them. When it does that, it fills up the kernel buffers for the pipe or TTY. At first, that works fine as well. Sooner or later, however, these buffers fill up as well. When that happens, the kernel suspends your processes and waits for someone to read from the other end of those pipes. Since you closed the terminal, however, no one ever will, and your programs are suspended indefinitely.

The standard way to avoid this problem is to disconnect the 0-2 file descriptors from the controlling TTY. Instead of telling you how to do that, I would like to suggest that what you are trying to do here, run a program so that it is completely disconnected from a TTY, has a name: daemonizing.

Check out this question for more details on how to do that.

Edited to add:

It was not clear from your question whether the programs you are execve'ing are your own or not. If they are not, please be aware that many user programs are not designed to run as daemons. The most obvious caveat is that if a program with no controlling TTY opens a TTY file without passing O_NOCTTY to open, that TTY becomes the controlling TTY of the program. Depending on circumstances, that might lead to unexpected results.

Shachar Shemesh
  • Thank you very much, that explanation helped me very much. What I wanted to do anyway is log to a file. The logoutput to the terminal is just temporary during development. I guess that is a sign now to make the program log to a file instead :). Thank you very much! – Nidhoegger May 23 '16 at 08:29
  • @Nidhoegger, if you are writing to a file with `fprintf(stderr,...)`, be aware of the semantics. If stdio functions see that the destination is not a TTY, they will flush when the internal buffers are full, not when they see an EOL (`\n`). Depending on your case, this may or may not be what you want. – Shachar Shemesh May 23 '16 at 08:48
  • *You write to stderr, and the standard library caches those writes.* Not necessarily. `stderr` is normally completely unbuffered. Attaching `strace` to the running process would almost certainly show the system calls being made to fd 2. – Andrew Henle May 23 '16 at 09:08
  • @AndrewHenle, please read my comment immediately above yours. stderr is always buffered. If fd 2 itself is a TTY, however, fprintf will flush every time it sees a new line. Try your strace with a program that prints a single line in two calls, and you'll see. – Shachar Shemesh May 23 '16 at 10:47
  • @ShacharShemesh *stderr is always buffered* That is [wrong](http://man7.org/linux/man-pages/man3/stdout.3.html): "**The stream `stderr` is unbuffered**. The stream `stdout` is line-buffered when it points to a terminal. Partial lines will not appear until `fflush(3)` or `exit(3)` is called, or a newline is printed." – Andrew Henle May 23 '16 at 11:00
  • @AndrewHenle, I stand corrected. – Shachar Shemesh May 23 '16 at 11:26