Our high-throughput application (~1 Gbps) benefits greatly from large ReceiveBufferSize and SendBufferSize values.

I noticed that on my machine I can use a 100 MB buffer size with no problems, but on some client and test machines the maximum value is a little over 10 MB and seems to vary.

Is there any way to query the system for the maximum tx/rx buffer size?

Joe

2 Answers

Actually, for high-performance networking, the SO_RCVBUF and SO_SNDBUF options should be set to 0 to avoid buffer copies, per KB181611:

If you use the SO_RCVBUF and SO_SNDBUF option to set zero TCP stack receive and send buffer, you basically instruct the TCP stack to directly perform I/O using the buffer provided in your I/O call. Therefore, in addition to the nonblocking advantage of the overlapped socket I/O, the other advantage is better performance because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call. But you have to make sure you don't access the user buffer once it's submitted for overlapped operation and before the overlapped operation completes.
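
In managed code that translates to something like the sketch below. The endpoint and buffer size are made up for illustration; the essential points are setting the buffer properties to 0 and not touching the user buffer while the overlapped operation is pending:

```csharp
using System.Net;
using System.Net.Sockets;

class ZeroCopySendSketch
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);

        // Setting the managed properties to 0 sets SO_SNDBUF/SO_RCVBUF to 0,
        // instructing the stack to do I/O directly against our buffer.
        socket.SendBufferSize = 0;
        socket.ReceiveBufferSize = 0;

        // Hypothetical endpoint, for illustration only.
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 9000));

        byte[] buffer = new byte[64 * 1024];
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(buffer, 0, buffer.Length);
        args.Completed += (s, e) =>
        {
            // Only here, after completion, is it safe to touch 'buffer' again.
        };

        if (socket.SendAsync(args))
        {
            // Operation is pending: do NOT read or write 'buffer'
            // until the Completed callback fires.
        }
    }
}
```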

The maximum values you can set these options to (these are the real settings behind the managed Socket.ReceiveBufferSize and Socket.SendBufferSize) are 'implementation dependent'. Other TCP parameters are documented at TCP/IP Registry Settings.

Remus Rusanu
  • Is it recommended to set Socket.ReceiveBufferSize = 0 on Win XP - Win 7? It's a bit strange that the default value is 8K when 0 could be a better setting. – Joe Jun 10 '11 at 14:24
  • After browsing the TCP/IP Registry Settings pages from your link, I was not able to find the max buffer size. Does anyone know what it's called? – Joe Jun 10 '11 at 15:11
  • The default 8K is for run-of-the-mill apps. Writing high perf apps is different in many aspects. http://rusanu.com/2008/11/11/high-performance-windows-programs/ – Remus Rusanu Jun 10 '11 at 18:30
  • @Remus, Excellent resource and excellent answer! Thank you for all this information and these links. Does setting the buffer to 0 have any ill effects on pre-Vista OSes? – Joe Jun 10 '11 at 21:29
  • Doesn't have ill effects, but your app has to behave accordingly. See the note about 'you have to make sure you don't access the user buffer once it's submitted'. – Remus Rusanu Jun 10 '11 at 21:39
  • @Remus, I'm doing some testing on this. Since I'm calling Socket.Send from managed code (.NET), I think it's possible a copy happens during the P/Invoke behind the scenes. If that is the case, the 'you have to make sure you don't access the user buffer once it's submitted' caveat may not apply, but only if you are calling Send from managed code. – Joe Jun 13 '11 at 16:40
  • This is also a bit invalid if he were using UDP. – poy Jun 20 '12 at 19:41
  • This does **not** work for my UDP listener. http://stackoverflow.com/questions/18483836/cause-of-high-udp-package-loss-on-localhost – l33t Aug 28 '13 at 12:30
  • Is it possible to configure the network card's buffer, in order to avoid losing data? – Guillaume Paris Sep 10 '15 at 08:11
  • @Guillaume07 please ask new questions as separate questions, not as comments on old questions – Remus Rusanu Sep 10 '15 at 08:12
  • @RemusRusanu done http://stackoverflow.com/questions/32496691/set-tcp-stack-buffer-to-0-and-network-interface-buffer – Guillaume Paris Sep 10 '15 at 08:26

Those two properties internally map to the socket options (via SetSocketOption, ultimately the native setsockopt). If memory serves, the limits depend on the non-paged pool memory available (which varies from machine to machine) and potentially on which network driver each machine uses.
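
For illustration, a minimal sketch of that equivalence (the 1 MB value is arbitrary):

```csharp
using System.Net.Sockets;

class BufferOptionSketch
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);

        // Both lines set the same underlying SO_RCVBUF option;
        // the property is a thin wrapper over SetSocketOption.
        socket.ReceiveBufferSize = 1 * 1024 * 1024;
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.ReceiveBuffer,
                               1 * 1024 * 1024);
    }
}
```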

Regardless, you aren't actually guaranteed that the buffer size you requested is used; you'll have to retrieve the current buffer size after the fact to make sure it took effect. Moreover, on Windows 7 and Windows Server 2008 machines, my understanding is that your buffers may be dynamically increased or decreased.

In short, you likely can only test increasing buffer sizes and take the largest that does not cause an error. There are too many variables at play that could determine the maximum size.
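
A sketch of that probing approach, assuming a rejected size either throws or is silently clamped (reading the value back detects the clamp):

```csharp
using System;
using System.Net.Sockets;

class BufferProbe
{
    // Doubles the requested size until the stack rejects or clamps it.
    static int ProbeMaxReceiveBuffer(Socket socket)
    {
        int lastAccepted = socket.ReceiveBufferSize;
        for (int requested = 64 * 1024; requested > 0; requested *= 2)
        {
            try
            {
                socket.ReceiveBufferSize = requested;
            }
            catch (SocketException)
            {
                break; // the stack rejected this size outright
            }

            // Read the value back; the set can be silently clamped.
            int actual = socket.ReceiveBufferSize;
            if (actual < requested)
                break; // clamped: the previous value is the effective max

            lastAccepted = actual;
        }
        return lastAccepted;
    }

    static void Main()
    {
        using var socket = new Socket(AddressFamily.InterNetwork,
                                      SocketType.Stream, ProtocolType.Tcp);
        Console.WriteLine(
            $"Max accepted receive buffer: {ProbeMaxReceiveBuffer(socket)} bytes");
    }
}
```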

user7116
  • Thanks for the answer. I'm really not a fan of "you likely can only test increasing buffer sizes and take the maximum that does not cause an error," but this does indeed seem to be the case, as I cannot find any real answers to my question. – Joe Jun 10 '11 at 21:18