
Does setting the socket receive buffer to a specified number of bytes directly determine how many fixed-size messages it can store?

Example:
If 100-byte messages are continuously sent over UDP to a socket whose receive buffer is set to 4,000 bytes, can I expect the buffer to hold 40 messages?

I thought that setting the buffer size, like so:

int size = 4000;
setsockopt(id, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(size));  

and letting the buffer fill from incoming packets, would result in a buffer containing 40 messages.
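
For reference, here is a simplified, self-contained version of the receiving side (the socket creation, bind, and port number are illustrative assumptions for this sketch; my real code differs only in details):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* Create the UDP socket (in my real code, `id` comes from elsewhere). */
    int id = socket(AF_INET, SOCK_DGRAM, 0);
    if (id < 0) { perror("socket"); return 1; }

    /* Bind to a local port so the sender can reach this socket
       (port 5000 is just a placeholder for this sketch). */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(id, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    /* Request a 4,000-byte receive buffer, as in the snippet above. */
    int size = 4000;
    if (setsockopt(id, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(size)) < 0)
        perror("setsockopt");

    /* ... receive loop goes here ... */
    return 0;
}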

After turning off the UDP sender and processing the buffer, that is not what I observed.
Despite my messages being 100 bytes each, a 4,000-byte buffer appears to hold only about 4 of them.
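
To make the observation concrete, this is roughly how I drain and count whatever is queued after stopping the sender (a sketch continuing from the setup above; the non-blocking MSG_DONTWAIT loop is an approximation of my actual processing code):

/* Read datagrams until the receive buffer is empty, counting them.
   Each successful recv() returns exactly one whole datagram. */
char msg[2048];
int count = 0;
for (;;) {
    ssize_t n = recv(id, msg, sizeof(msg), MSG_DONTWAIT);
    if (n < 0)
        break;              /* EAGAIN/EWOULDBLOCK once the buffer is drained */
    count++;
}
printf("%d messages were queued\n", count);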


How could 100-byte messages be taking up 1,000 bytes each in the buffer?
Does this make sense? What is causing it, and how can I calculate a buffer size based on how many messages it needs to hold?


Edit: the suggested duplicate does not solve my problem.
The user there was calling setsockopt incorrectly.
I'm trying to find documentation that describes the relationship between a socket receive buffer's size and the number of fixed-size messages it can actually hold.

Trevor Hickey
  • You need to read the documentation for the APIs you're using. It states clearly that the system can adjust the actual buffer size, and that you should call `getsockopt()` to determine what the actual buffer size is. Remember also that the system has to store the UDP header somewhere as well as the payload. – user207421 Feb 15 '16 at 21:33
  • @EJP `getsockopt()` returns the same value that was set. Even though the messages are 100 bytes and `getsockopt()` returns 4,000, it can only hold 4 messages. If I make the receive buffer 8,000, it can hold 8 messages. I'm just confused about what's happening. Why would it need to allocate an extra 900 bytes for a 100-byte message? Is this normal behavior? Maybe it has something to do with alignment? Regardless of the message size, maybe it allocates 1,000 bytes per datagram. The Linux documentation on these functions doesn't discuss this. (See the readback sketch after these comments.) – Trevor Hickey Feb 15 '16 at 21:37
  • Are you sure the messages are 100 bytes? What do you see in Wireshark? – dbush Feb 15 '16 at 21:42
  • @dbush Wireshark shows 100 bytes in terms of the message payload (which is the same number of bytes returned by recv()). Wireshark also shows 142 raw bytes across the wire. So I assume the header is 42 bytes. – Trevor Hickey Feb 15 '16 at 21:46
  • The UDP header is 8 bytes and the IP header is 20 or more. Wireshark will show you exactly. The socket receive buffer doesn't need to store the IP header. I don't know why you're still fiddling about with these tiny buffer sizes after what I said yesterday in one of your [numerous other threads on this topic](http://stackoverflow.com/questions/35384411/why-are-particular-udp-messages-always-getting-dropped-below-a-particular-buffer). – user207421 Feb 15 '16 at 21:49
  • @EJP I see. I suppose the value of SO_RCVBUF cannot be used to calculate exactly how many messages can be stored in the buffer. As far as I can tell, the OS is going to use that buffer however it sees fit, regardless of the incoming message sizes. I was just hoping there was some documentation about this in regards to `setsockopt()`. – Trevor Hickey Feb 15 '16 at 21:52
  • @EJP Yes, I realize the buffer sizes are too small. The company I work at has had them this way for years and years. I just need a clear argument about the behavior of `setsockopt()`, and to have all my facts straight, so that I can convince management and my team members to let me make the appropriate changes. Since we are working on embedded hardware with minimal space, and various connections all sharing the same socket functionality, I have gotten pushback on my decision to increase buffer sizes. – Trevor Hickey Feb 15 '16 at 22:25
  • The smallest socket buffer size I've ever seen is 8k on some old Windows versions, and even that was always far too small even twenty years ago. There's a paper detailing a major throughput improvement changing it from 1k to 4k in BSD 4.2 or so, in about 1983. It was 28k in OS/2 in the early 90s, and 48k or so on most Unixes and Linuxes when I wrote my books ten and more years ago. People are now using megabytes. It seems to me you already have all the evidence you need. Another 24k isn't going to kill anybody, surely? – user207421 Feb 15 '16 at 23:58
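
Following up on the readback suggestion in the comments, this is the kind of check I'm running (a minimal sketch continuing from the setup in the question; note that on Linux, socket(7) documents that the kernel doubles the value set with setsockopt() to leave room for bookkeeping overhead, and it is that doubled value getsockopt() returns):

/* Ask the kernel what receive buffer size it actually applied. */
int actual = 0;
socklen_t len = sizeof(actual);
if (getsockopt(id, SOL_SOCKET, SO_RCVBUF, (char *)&actual, &len) < 0) {
    perror("getsockopt");
} else {
    /* On Linux the reported value is double what was requested (socket(7));
       other systems may round or clamp the request differently. */
    printf("requested 4000, kernel reports %d\n", actual);
}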
