
I am writing a service script for a C project I am working on; it executes a few utilities on startup. I want to capture all of their output using a logging utility. I have something like the following in /etc/rc5/myscript:

#!/bin/bash
# save fd 1 in fd 3 for use later
exec 3<&1
$SERVICESCRIPT | logger

The logger merely reads from stdin until it hits EOF. The second script ($SERVICESCRIPT) checks whether a bunch of utilities are running and fires off a few of its own. Among these utilities there is one which forks and becomes a daemon process. Since I am running it from the script, it inherits all of the script's fds. This causes the script to never return to the command line after being invoked.

I have tried to counter this in a few ways:

First, in my script which kicks off the daemon process I have done the following:

(
exec 4<&-
exec 3<&-
$daemon_process
)

This should launch a subshell, close fds 3 and 4 (used for the saved stdout and the piped output, respectively), and run the program. But I still get a hang when attempting to come back to the command line, which leads me to believe that the pipe was not closed. Upon further investigation, if I put an echo after the close and redirect it to the fd which was piped to the logger, I do see it in the log, telling me that the fd is indeed still intact. If I close fds 2-4 in the C program itself, I do see it return to the command line; however, this is a very messy and unpleasant fix (sketched below).
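
For reference, that messy fix in the C program amounts to something like the following sketch (the function name is just illustrative, and the fd numbers are simply the ones from my setup):

#include <unistd.h>

/* The messy fix: right after daemonizing, drop the descriptors that were
 * inherited from the service script (fds 2-4 in my setup) so the
 * logger's end of the pipe is no longer held open. */
void close_inherited_fds( void ){
        int fd;

        for( fd = 2; fd <= 4; fd++ ){
                close( fd );
        }
}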

Second, I tried the following:

$daemon_process 4<&- 3<&-

which should close the fds when calling the program, but alas, I see the same result: the script never comes back to the command line.

When the script hangs I can Ctrl-C it to get back to the command line, but this is by no means a solution.

Any ideas?

THANKS!!!!

blrg891

2 Answers


Your /etc/rc5/myscript isn't blocking because of anything inside $SERVICESCRIPT. It's blocking because it's waiting for logger to terminate, which isn't going to terminate until everything writing to its STDIN has terminated (which, in this case, is your daemon).

You can see this behavior with this simplified example. Consider this simple C program that orphans itself and then does nothing forever:

#include <stdlib.h>
#include <unistd.h>     /* fork, sleep */

int main( int argc, char *argv[] ){
        if( fork() ){
                exit( 0 );
        }
        while( 1 ){
                sleep( 1 );
        }
        return EXIT_SUCCESS;
}

and this simple "logger" that just reads from STDIN until EOF:

#include <stdio.h>

int main( int argc, char *argv[] ){
        int c;          /* int, not char, so EOF is representable */
        while( 1 ){
                c = getc( stdin );
                if( c == EOF ){
                        break;
                }
        }
        return 0;
}

If I run these together, I won't get my command prompt back.

$ ./forktest | ./logger
<hangs>

This is because my shell is waiting for the whole pipeline to finish. forktest "finishes" (its original process exits immediately), but logger doesn't finish, and that is what the shell is waiting on: the orphaned child of forktest is holding open the STDIN of logger. You can see the pipe from STDOUT (fd 1) of the orphan (notice its parent process is 1) going to STDIN (fd 0) of logger by checking /proc/$pid/fd in another terminal while the above is running:

$ ps -ef | grep forktest
cneylan  25451     1  0 16:27 pts/7    00:00:00 ./forktest
$ ps -ef | grep logger
cneylan  25450 24379  0 16:27 pts/7    00:00:00 ./logger
$ ls -l /proc/25451/fd
total 0
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 0 -> /dev/pts/7
l-wx------ 1 cneylan cneylan 64 Jul  2 16:28 1 -> pipe:[944400]
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 2 -> /dev/pts/7
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 3 -> /dev/pts/7
$ ls -l /proc/25450/fd
total 0
lr-x------ 1 cneylan cneylan 64 Jul  2 16:28 0 -> pipe:[944400]
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 1 -> /dev/pts/7
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 2 -> /dev/pts/7
lrwx------ 1 cneylan cneylan 64 Jul  2 16:28 3 -> /dev/pts/7

As a side note, doing ^C when this happens will only signal the logger process, since your daemon [was supposed to have] called setsid(2), one of the necessary steps in daemonizing itself. So either ^C was killing your daemon and you need to have your code call setsid(2), or your code already calls setsid(2) and you have a bunch of rogue daemons running in the background :)
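
For reference, a minimal daemonizing sequence looks something like the sketch below (error handling omitted); it is roughly what daemon(3) does for you when its noclose argument is 0. Redirecting fds 0-2 onto /dev/null in the child is the step that releases any inherited pipe, so a reader like logger finally sees EOF:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static void daemonize( void ){
        int devnull;

        if( fork() ){           /* parent exits; child is re-parented to init */
                exit( 0 );
        }
        setsid();               /* new session, no controlling terminal */
        if( fork() ){           /* optional second fork: the daemon can never
                                   reacquire a controlling terminal */
                exit( 0 );
        }
        devnull = open( "/dev/null", O_RDWR );
        dup2( devnull, 0 );     /* stdin  */
        dup2( devnull, 1 );     /* stdout -- this releases the pipe to logger */
        dup2( devnull, 2 );     /* stderr */
        if( devnull > 2 ){
                close( devnull );
        }
}

int main( int argc, char *argv[] ){
        daemonize();
        while( 1 ){
                sleep( 1 );     /* the daemon's real work would go here */
        }
        return EXIT_SUCCESS;
}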

Christopher Neylan
  • I wanted to close the piped fds before calling the daemon program so that the program does not hold them open forever. The issue is that the commands in bash which I expect to do that don't actually do that. In essence I want to close any piped fd for "forktest" before it is even called. I use the Linux-supplied daemon() function. I have tried this with (1,1) arguments as well as (1,0). In the case of (1,0) the daemon function should redirect fds 0-2 to /dev/null. However, I want to preserve them so as to write errors to the screen, so closing the piped fds in the script is preferable. – blrg891 Jul 02 '13 at 21:02
  • having random open file descriptors isn't causing your problem. your problem is that your `/etc/rc5/myscript` is waiting for `logger` to finish. – Christopher Neylan Jul 03 '13 at 13:02
  • Right, but the logger is waiting for EOF which will never come because the daemon process is holding a file descriptor with the pipe open. Once the logger gets EOF it finishes. – blrg891 Jul 03 '13 at 14:07
  • Well, it would seem that I cannot get what I want out of bash. This is an interesting related topic: http://stackoverflow.com/questions/5713242/linux-fork-prevent-file-descriptors-inheritance What I did to solve my issue is look for the special message at the end of the service startup indicating a successful start, have the logger log it, sleep for a second, then close. Granted, it's not a great solution, but it is sufficient and better than manually closing file descriptors in every daemon utility it boots. – blrg891 Jul 05 '13 at 17:57
  • well, in general, daemons should close all of their in/out handles and either write to a logfile that they open or use syslog (a minimal syslog sketch follows these comments). so if you want your daemon to generate output, then you should do one of those, but in either case, the daemon is handling it internally. on the other hand, if you're only interested in using your logger to log a "success" message, then you should start your daemon and then check $? to write a relevant message. – Christopher Neylan Jul 09 '13 at 16:39
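
A minimal sketch of the syslog approach mentioned above (the identifier "mydaemon" is just a placeholder): a daemon that logs through syslog(3) holds no descriptors back to the launching script at all.

#include <syslog.h>

int main( int argc, char *argv[] ){
        /* Log through syslog(3) instead of an inherited stdout/stderr. */
        openlog( "mydaemon", LOG_PID, LOG_DAEMON );
        syslog( LOG_INFO, "service started" );

        /* ... the daemon's real work would go here ... */

        closelog();
        return 0;
}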

As you correctly recognized, logger blocks because of the pipe between it at the read end and (ultimately) the $daemon_process at the write end. Since you want the latter's output to be written to the screen,

$daemon_process >/dev/tty

would solve the problem.

Armali