Recently I have been bitten by the FD_SET buffer overflow twice. The first time, we had too many sockets (1024+) added into the FD_SET. That was a test case; we have disabled it and added an assert to detect this case.
Today we hit another related issue when running a test case 1000+ times. On each pass, the test case allocates a socket and releases it before the test case finishes. After 1000+ runs, the test hits the FD_SET buffer overflow.
We have found the root cause:
- On each pass, the allocated socket id increases (+1); the OS does not reuse a socket id for a long time. The operating system is macOS, and I think that is a reasonable design, since it avoids silently reusing an already released socket. But FD_SET just sets a bit in the fd_set bit array using the socket id as the index, so a large socket id overflows the array. I think fd_set is a bad design.
We think 1000+ runs is a reasonable number. And we don't think defining a macro to make fd_set huge is reasonable either: it wastes memory and CPU during the wait.
We don't know how to resolve this, so any suggestions?
-------------Edit1----------------
It turns out there was a socket leak somewhere else, violating the rule that a destructor should release all its resources. That is what made the socket id keep increasing.
So item #1 is not true: the operating system will reuse socket ids.
But the discussion was helpful anyway; FD_SET is a bad design, and we should be using poll().