The rule is: close a socket when you're done using it; OTOH if you plan to keep receiving/sending data via that socket in the future, then keep it held open so you can do that. When it makes sense to keep a socket open (vs closing it and then re-opening a new TCP connection later) is a judgement call that will depend on what your program is trying to accomplish.
It sounds like your program might be suffering from a "socket leak", where in some cases your program forgets to call close() on sockets that it no longer intends to use, while continuing to create new sockets. In that case, these open-but-forgotten sockets will build up over time, and eventually your program will run out of resources and be unable to create any more sockets. This is a bad thing.
You don't say what language you're programming in, but if you're programming in C++, an easy way to avoid a socket-leak is to wrap each socket into a socket-holding object:
class SocketHolder
{
public:
   SocketHolder(int socketfd) : _fd(socketfd) {/* empty */}
   ~SocketHolder() {if (_fd >= 0) close(_fd);}

   int GetSocketFD() const {return _fd;}

private:
   SocketHolder(const SocketHolder &);  // private and unimplemented (to prevent accidental copying of SocketHolder objects)

   int _fd;
};
... and then whenever you call socket()/accept()/etc to create a new socket FD, immediately hand it over to a SocketHolder object which is held by a smart-pointer, e.g.:
int s = socket(AF_INET, SOCK_STREAM, 0);
std::shared_ptr<SocketHolder> holder(new SocketHolder(s));
... and modify your program to store and/or pass around std::shared_ptr&lt;SocketHolder&gt; objects instead of int socket-values.
The advantage is that once you've done this, you no longer have to remember to call close(s) explicitly at the appropriate time, because it will automatically be called for you when its SocketHolder object is deleted, and the SocketHolder object will be automatically deleted when the last smart-pointer referencing it goes away. Since the close() call now happens without any effort on the programmer's part, a socket-leak is much less likely to occur.
Another possible source of your problem is that the OS will (by default) keep records for recently-closed TCP connections in memory for a short period even after you've closed them (this is the TIME_WAIT state). Doing this helps the OS ensure that any data sent to those sockets just before they were closed can get delivered, and also helps the OS avoid misinterpreting future TCP packets. However, it does mean that TCP socket records can hang around for a brief period after the TCP socket has been closed; if that's a problem for you, you might be able to address it by setting the SO_LINGER socket option, as discussed here.
If you're going to be keeping TCP connections to dozens or hundreds of devices open simultaneously, that's fine; you can do so and multiplex across them using select() or poll() or epoll() or kqueue() or similar APIs designed for handling multiple sockets at once. In my experience, using non-blocking I/O is simpler in the long run, since it avoids the possibility that one very slow (or malfunctioning) client device might hang up your entire server.
If you really need to support thousands of simultaneous TCP connections instead of just dozens or hundreds, you might run into some scaling issues; you might want to read The C10K Problem, a somewhat dated but still informative article about various approaches to handling that size of connection-load.
As paulsm4 mentioned, another alternative might be to use UDP instead of TCP for your communications. The advantage of that is that you only need to create a single socket (which can be used to communicate with any number of devices) rather than a socket-per-device. Some disadvantages include the fact that it's often difficult to get UDP packets routed across the Internet (firewalls tend to reject incoming UDP packets by default), and also UDP packets don't get any delivery or ordering guarantees the way TCP streams do. Of course, if your devices speak only TCP, then UDP won't be an option for you.