Problem:
After a close() syscall fails with EINTR or EIO, it is unspecified whether the file descriptor has been closed (http://pubs.opengroup.org/onlinepubs/9699919799/). In multi-threaded applications, retrying the close() may close an unrelated file opened by another thread in the meantime; not retrying may let unusable open file descriptors pile up. A clean solution might involve calling fstat() on the freshly closed file descriptor together with a fairly complex locking scheme. Serializing all open()/close()/accept()/... invocations behind a single mutex may also be an option.
These solutions do not account for library functions that open and close files on their own in ways the application cannot control; for example, some implementations of std::thread::hardware_concurrency() open files in the /proc filesystem.
File streams, as specified in the [file.streams] section of the C++ standard, are not an option.
Is there a simple and reliable mechanism to close files in the presence of multiple threads?
Edits:
Regular files: Most of the time no unusable open file descriptors will accumulate, but two conditions can trigger the problem:

1. signals delivered at high frequency (e.g., by some malware), and
2. network file systems that lose their connection before caches are flushed.
Sockets: According to Stevens/Fenner/Rudoff, if the SO_LINGER option is set on a file descriptor referring to a connected socket and, during a close(), the linger timer elapses before the FIN/ACK shutdown sequence completes, close() fails as part of the normal procedure. Linux does not show this behavior; FreeBSD does, and sets errno to EAGAIN. As I understand it, in this case it is unspecified whether the file descriptor is invalidated. C++ code to test the behavior: http://www.longhaulmail.de/misc/close.txt The test output there looks like a race condition in FreeBSD to me; if it's not, why not?
One might consider blocking signals during calls to close().