
I have a C application that runs a shell script. The C program uses the following logic (at a high level):

  1. C application listens on port 7791, call this "webClient".
  2. Once webClient gets data via port 7791, among other things, it calls the fork command.
  3. Inside child process, run execve with my shell script.
  4. The shell script does various activities, including starting another C daemon that should not be listening on any ports (does not use sockets at all). Call this C daemon "emaildae". Only one daemon is started, obviously in the background.
  5. The webClient application waits for child process to finish, reporting any problems.
  6. The webClient application goes back to listening on port 7791.

When I run `netstat -nlp | grep 7791`, I see webClient listening, as I would expect. When I kill webClient, I then see emaildae listening on port 7791 (using netstat again). I have tried starting emaildae with nohup and disown, but neither nohup by itself nor nohup with disown solves the problem. I tried closing the socket with `close(socketFd)` in the child process, like I do in other places, but that did not work either. I know from various web searches that the socket descriptor is inherited by child processes, along with all other open file descriptors. What I don't know is how to prevent this from happening. Maybe I closed it wrong. If there is a way to close the socket descriptor in the child without impacting the parent process, that might fix things. Any ideas?

Tony B
  • Look up the `O_CLOEXEC` file descriptor flag. Or close descriptors in the child after fork but before the exec – Shawn Mar 06 '22 at 04:01
  • 1
    Closing the listening socket in the child between `fork` and `execve` should do it. You must be doing something wrong, so you need to post your code. – Barmar Mar 06 '22 at 05:39
  • I think I tried closing the child socket (from `childSocket = accept(listeningSocket, &sChildSocket, &size)`) instead of the listening socket (from `bind(listeningSocket, &socket, sizeof(struct sockaddr_in));` and `listen(listeningSocket, 16);`). So maybe that was what I did wrong? – Tony B Mar 07 '22 at 01:02
  • @Barmar you were correct, I was closing the child socket, not the listening socket. This is also a repeat of [a more full answer here](https://stackoverflow.com/a/6019241/3329922), which I missed until I saw your phrasing. If you want credit for the answer, then add an answer and I will approve it. – Tony B Mar 07 '22 at 03:55

1 Answer


It turns out @Barmar was correct: I was closing the wrong file descriptor. Once I closed the listening socket's file descriptor in the child, it worked.

Also, this is a repeat of [a more complete answer](https://stackoverflow.com/a/6019241/3329922) - I missed it until I saw @Barmar's phrasing.

In my case, I do the following:

  1. Grab the listening socket file descriptor, `listeningSocketFd`.
  2. Call `fork`.
  3. In the child process, before calling `execve`, call `close(listeningSocketFd);`.

I do nothing in the parent process right now. Not sure if that will cause me problems or not.

Tony B
  • It is excellent practice after a fork for *each* process to close all open file descriptors that it itself no longer needs. In some cases, such closures are necessary for proper behavior. Those closures by one process do not affect the other process's ability to use the file to which the fd refers. – John Bollinger Mar 08 '22 at 17:10