Typically receive-buffer sizes are increased because the code's author wants to reduce the likelihood of the socket's receive buffer filling up, at which point the OS must drop incoming packets because it has nowhere to put their data. In a TCP-based application, that condition temporarily stalls the stream until the dropped packets are successfully retransmitted; in a UDP-based application, it causes incoming UDP packets to be silently dropped.
Whether doing that is necessary depends on two factors: how quickly incoming data is expected to fill the socket's receive buffer, and how quickly the application can drain it via calls to recv(). If the application can reliably drain the buffer faster than data arrives, the default buffer size is fine; OTOH, if you see that it can't always keep up, a larger receive buffer may help it handle sudden bursts of incoming data more gracefully.
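For reference, here's a minimal sketch (in C, assuming a POSIX sockets environment) of how a larger receive buffer is typically requested, via setsockopt() with SO_RCVBUF. The helper name and the way the desired size is passed in are just for illustration:

```c
#include <stdio.h>
#include <sys/socket.h>

/* Ask the OS for a larger receive buffer on an already-created socket.
 * Returns 0 on success, -1 on failure.  The kernel treats the value
 * as a hint and may clamp it to a system-configured maximum
 * (e.g. the net.core.rmem_max sysctl on Linux). */
int set_receive_buffer_size(int sock, int desiredBytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &desiredBytes, sizeof(desiredBytes)) != 0)
    {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}
```

For TCP in particular, it's best to do this right after the socket is created (i.e. before connect() or listen()), since the buffer size can influence the window scaling negotiated at connection setup.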
"Is there any reason at all to modify receive buffer sizes when sockets are monitored and read continuously, e.g. using select?"
There could be, if the incoming data rate is high (e.g. megabytes per second, or even just occasional bursts of data at that rate), or if the thread is doing something between select()/recv() calls that might keep it busy for a significant period of time -- e.g. if the thread ever needs to write to disk, disk-write calls might take several hundred milliseconds in some cases, potentially allowing the socket's receive buffer to fill during that period.
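To make that concrete, here's a rough sketch of the kind of select()-based read loop in question (error handling abbreviated, and handle_data() is just an invented placeholder for the application's processing). The point is that any time spent inside handle_data() is time the thread isn't calling recv(), during which the kernel's receive buffer keeps filling:

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Placeholder for the application's processing; imagine this
 * occasionally blocks for a while, e.g. on a disk write. */
static void handle_data(const char *data, ssize_t numBytes)
{
    (void)data;
    printf("received %zd bytes\n", numBytes);
}

/* Continuously drain (sock) using select() + recv(). */
static void read_loop(int sock)
{
    char buf[4096];
    for (;;)
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(sock, &readSet);

        if (select(sock + 1, &readSet, NULL, NULL, NULL) < 0) break;

        if (FD_ISSET(sock, &readSet))
        {
            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n <= 0) break;    /* connection closed, or error */
            handle_data(buf, n);  /* while this runs, the receive buffer keeps filling */
        }
    }
}
```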
For very high-bandwidth applications, even a very short pause (e.g. due to the thread being kicked off of the CPU for a few quanta, so that another thread can run for a quantum or two) might be enough to allow the buffer to fill up. It depends a lot on the application's use-case, and of course on the speed of the CPU hardware relative to the network.
As for when to start messing with receive-buffer sizes: don't do it unless you notice that your application is dropping enough incoming packets to noticeably limit its network performance. There's no sense allocating more RAM than you need to.
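If you do decide to experiment, it's worth checking what the kernel actually gave you, since the requested size is only a hint. For example (again assuming POSIX; note that on Linux, getsockopt() reports double the value you requested, because the kernel doubles it to leave room for bookkeeping overhead):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Print the effective receive-buffer size of (sock).  On Linux this
 * is typically twice what was requested via SO_RCVBUF, and the
 * requested size is capped by the net.core.rmem_max sysctl. */
void print_receive_buffer_size(int sock)
{
    int actualBytes = 0;
    socklen_t len = sizeof(actualBytes);
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actualBytes, &len) == 0)
        printf("effective receive-buffer size: %d bytes\n", actualBytes);
    else
        perror("getsockopt(SO_RCVBUF)");
}
```

On Linux, one way to spot the dropped-packet condition for UDP is the "receive buffer errors" counter in the output of netstat -su; if that counter is climbing while your app runs, packets are being dropped because the receive buffer was full.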