I have the following C++ scenario under CentOS.
Process P1 contains:
- passive listening socket with descriptor D1.
- 1 incoming connection (socket with D2).
- 1 outgoing connection (socket with D3).
- thread T1 with 1 outgoing connection (socket with D4).
Incoming socket connections are created with:
- socket(AF_INET, SOCK_STREAM, 0);
- setsockopt(..., SO_REUSEADDR, ...);
- bind
- listen
- accept
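The accept-side steps above can be sketched as follows; the function name `create_listener` and the use of `INADDR_ANY`/`SOMAXCONN` are my assumptions for illustration, not part of your setup:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <cstdint>

// Sketch of the accept-side setup: returns the listening fd (D1).
// Each accept() on it then yields a connected fd such as D2.
int create_listener(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```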
Outgoing socket connections are created with:
- socket(AF_INET, SOCK_STREAM, 0);
- connect
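The connect side can be sketched the same way; `create_outgoing` and its parameters are hypothetical names I chose for the example:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <cstdint>

// Sketch of the connect-side setup: returns a connected fd (D3/D4).
int create_outgoing(const char* ip, uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &peer.sin_addr) != 1 ||
        connect(fd, reinterpret_cast<sockaddr*>(&peer), sizeof(peer)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```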
Process P1 is now forked to P2, and in P2 a new outgoing connection should be created in a new thread T2. I am not interested in the other old connections here.
What exactly do I have to consider after the fork in P2? What are the best practices here? Are my assumptions below correct? I would implement it as follows:
After the fork
- I close D1 in P2 immediately, because I don't want to listen on the same port in two processes at once, although that would be possible. Correct?
- Because all FDs were copied (reference-counted?), I can safely close D2 and D3 in P2 without endangering the communication in P1, right?
- T1 is "dead" after the fork anyway (only the thread that called fork() exists in P2), so I should close D4 in P2 here too, right?
- Finally, I can spawn T2 in P2 and create a new outgoing socket D5 there.
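The four post-fork steps above can be sketched like this; `fork_child` and `t2_work` are hypothetical names, and the descriptor parameters stand for D1..D4 from the question:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <cassert>
#include <thread>

// Placeholder for T2's work: this is where the new outgoing
// socket D5 would be created (socket()/connect()).
void t2_work() {}

// Sketch of the post-fork plan: the child closes every inherited
// descriptor it does not need, then spawns T2. close() in P2 only
// drops P2's reference; P1's copies of the same open file
// descriptions stay valid.
pid_t fork_child(int d1, int d2, int d3, int d4) {
    pid_t pid = fork();
    if (pid == 0) {               // child P2
        close(d1);                // stop listening in P2
        close(d2);                // inherited incoming connection
        close(d3);                // inherited outgoing connection
        close(d4);                // T1's socket; T1 itself does not
                                  //   exist in P2 (only the forking
                                  //   thread survives fork)
        std::thread t2(t2_work);  // spawn T2, which creates D5
        t2.join();
        _exit(0);
    }
    return pid;                   // parent P1 continues unchanged
}
```

One caveat: POSIX says that after fork() in a multithreaded process, the child should restrict itself to async-signal-safe functions until it execs, because another thread may have held a lock at fork time. In practice this pattern works on Linux when the post-fork code path is kept minimal and lock-free, but it is a reason to keep the cleanup short.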
Would the following scenario also be possible, where I want to reuse D4?
After the fork
- I close D1 in P2 directly
- I close D2 and D3 in P2
- T1 is "dead" but I do not close D4 in P2
- I spawn T2 and share the usage of D4 with T1, which is still running in P1. Is this race-condition-safe? Is this a good/common practice/pattern?
General question:
If the lifetime of P2 is always shorter than that of P1, aren't all descriptors automatically released/reference-counted down when P2 terminates? Do I need to close any FDs at all in this case?