
I want to account for a possible scenario where clients of my TCP/IP stream socket service send data faster than my service manages to move it into its own buffers (I am talking about application buffers, naturally) with recv and work with it.

So basically, what happens in such scenarios?

The way I see it, some facility beneath my service has to receive pieces of the incoming stream and store these somewhere until I issue 'recv', right? Most certainly the operating system. What happens if it runs out of memory to store the pieces while my service is not receiving them fast enough?
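To make the scenario concrete, the per-connection loop I have in mind is roughly the following (a simplified sketch, not my actual code; error handling omitted):

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Per-connection receive loop (simplified sketch). */
    static void handle_connection(int fd)
    {
        char buf[4096];                 /* my application buffer */
        ssize_t n;
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
            /* ... work with the n bytes here; this part may well be
               slower than the rate at which the client sends ... */
        }
        close(fd);                      /* n == 0: peer closed; n < 0: error */
    }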

I don't want to re-open old questions, but I can't seem to find an answer to this seemingly obvious one.

Armen Michaeli

2 Answers


TCP provides flow control. The TCP stack (on both the sender and receiver side) will buffer some data for you, and this is usually done in the OS kernel.

When the receiver's buffers fill up, the sender will know about it and stop sending more data, eventually leading to the sending application blocking (or otherwise not being able to send more data) until space becomes available again.
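With a non-blocking socket the same condition shows up as EAGAIN/EWOULDBLOCK from send() rather than blocking; a minimal sketch of how a sender might distinguish that from a real error:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Try to push some data on a non-blocking socket.  Returns bytes
       sent, 0 if the local send buffer is full (the receiver isn't
       keeping up - wait for POLLOUT and retry), or -1 on a real error. */
    static ssize_t send_some(int fd, const void *data, size_t len)
    {
        ssize_t n = send(fd, data, len, 0);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;   /* back-pressure from the peer, not an error */
        return n;
    }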

Briefly described, every TCP segment sent includes the amount of data that can still be buffered - the window size. This means the other end knows at all times how much data it can send without the receiver throwing it away because the buffers are full. If the window size becomes 0, the buffers are full and no more data will be sent (and in the case of a blocking sender, a send() call will block). There are procedures for probing whether the TCP window is still 0, so sending can resume once the data has been consumed.
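If you want to see how much the kernel will buffer for a given socket (which is roughly what bounds the advertised window), you can query SO_RCVBUF; a minimal sketch, keeping in mind the exact value and its bookkeeping vary by OS (Linux, for instance, reports roughly double the requested size):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Print the kernel's receive buffer size for this socket. */
    static void print_rcvbuf(int fd)
    {
        int rcvbuf = 0;
        socklen_t len = sizeof rcvbuf;
        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
            printf("kernel receive buffer: %d bytes\n", rcvbuf);
    }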

There are some more details here

nos
  • Not quite. Every TCP *acknowledgment* contains the current window size. Wikipedia is incorrect on this point. The correct reference is not Wikipedia but RFC 793. – user207421 May 17 '11 at 10:26
  • Are there any approximate values on buffer size one might consider typical/safe? Or is it too varied to make any such statement? – Mr. Boy Jan 22 '20 at 15:56

It's the network driver stack that maintains data buffers (including the ones for incoming data). If the buffer is filled, subsequent TCP packets are dropped, and the client is stuck trying to send the data. There's a bit more on this here and here.
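If the sending side doesn't want to sit in a blocking send() indefinitely while the receiver catches up, it can wait for writability with a timeout first; a minimal sketch using poll():

    #include <poll.h>

    /* Wait up to timeout_ms for the socket to become writable again,
       i.e. for the receiver to drain some data.  Returns >0 when
       writable, 0 on timeout, -1 on error. */
    static int wait_writable(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        return poll(&pfd, 1, timeout_ms);
    }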

Eugene Mayevski 'Callback
  • No. A zero window is advertised to the sender and the sender stops sending. Ultimately the sender's socket send buffer fills up and the application blocks unless it is in non-blocking mode, in which case it gets EAGAIN or EWOULDBLOCK. TCP packets would only get dropped if the sender wasn't implementing TCP correctly and sending into a zero-sized window. – user207421 May 17 '11 at 10:28
  • 1
  • @EJP your comment is only partially correct. On the IP level the packets *are* discarded. My reply is incorrect in reference to TCP packets (which I guess was a typo), yet the client is still blocked (even with your comment). – Eugene Mayevski 'Callback May 17 '11 at 12:10
  • The question is about TCP. TCP segments are carried in IP packets. On the IP level the packets are only discarded if the segments they carry are sent at all, and those are only sent if the window permits; the window doesn't permit, so they shouldn't be sent. – user207421 May 18 '11 at 06:04