
I tried the server and client from the official TCP echo example. With `netstat -ano | findstr TIME_WAIT` I can see that the client ends up in TIME_WAIT every time, while the server disconnects cleanly.

Is there any way to prevent the TIME_WAIT or CLOSE_WAIT, so that both sides disconnect cleanly?

Here are the captured packets; it seems the last ACK is sent correctly, but there is still a TIME_WAIT on the client side. [screenshot of the packet capture]

aj3423

2 Answers

  • CLOSE_WAIT is a programming error. The local application has received an incoming close but hasn't closed this end.

  • TIME_WAIT comes after a clean disconnect by both parties, and it only lasts a few minutes. The way to avoid it is to be the end that receives the first close. Typically you want to avoid it at the server, so you have the client close first, as in the sketch below.
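
For illustration (not part of the original answer; a minimal sketch using plain POSIX sockets, whereas the question's echo example may use a different API): a client that initiates the shutdown is the side that ends up in TIME_WAIT, which is where you normally want it.

```cpp
// Sketch of a client that closes first (plain POSIX sockets; on Windows use
// WSAStartup()/closesocket(), the rest is the same). Because this side sends
// the first FIN, the TIME_WAIT entry appears here, not on the server.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);              // port of the echo server (assumed)
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) != 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello";
    send(fd, msg, sizeof msg - 1, 0);            // send a request
    char buf[128];
    recv(fd, buf, sizeof buf, 0);                // read the echoed reply

    shutdown(fd, SHUT_WR);                       // client closes first: sends FIN
    while (recv(fd, buf, sizeof buf, 0) > 0) {}  // drain until the server's FIN
    close(fd);                                   // this endpoint now enters TIME_WAIT
    return 0;
}
```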

user207421
  • So, for my server, I guess TIME_WAIT on the client side is the normal behavior and the best result; it can't be CLEAN on both sides, I mean no XXX_WAIT or any other status. – aj3423 Jan 26 '16 at 11:49
  • You need to get rid of this notion that TIME_WAIT isn't 'clean'. It is a deliberately designed feature of TCP and it is essential to its correct working. It is certainly preferable, at least aesthetically, to shift it to the client, where it won't be noticed, rather than piling up at the server, where it certainly will. – user207421 Jan 26 '16 at 11:57

A long-lingering CLOSE_WAIT is really a programming error (the OS performs its part of the connection shutdown, but your application doesn't close the socket in a timely manner -- or at all); see the sketch below.
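
As a minimal sketch (plain POSIX sockets, not the code from the question): the side that receives the close sees recv() return 0 and should release its socket right away; forgetting to do so is exactly what leaves it in CLOSE_WAIT.

```cpp
// Sketch of the receiving side of a connection (plain POSIX sockets).
// When recv() returns 0 the peer has closed; not calling close() here
// leaves this socket stuck in CLOSE_WAIT.
#include <sys/socket.h>
#include <unistd.h>

void handle_connection(int client_fd) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(client_fd, buf, sizeof buf, 0);
        if (n > 0) {
            send(client_fd, buf, n, 0);   // echo the data back
        } else {
            // n == 0: peer sent FIN; n < 0: error. Either way, close the
            // descriptor promptly -- this is what avoids a lingering CLOSE_WAIT.
            close(client_fd);
            return;
        }
    }
}
```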

TIME_WAIT, however, is not really an exceptional condition. It is necessary for a clean close on a connection whose very last ACK segment might have been lost during the normal connection shutdown. Without it, the retransmission of the FIN+ACK segment would be answered with a connection reset, and some sensitive applications might not like that.

The most common way to reduce the number of sockets in the TIME_WAIT state is to shorten its duration globally, by tuning an OS-level parameter. IIRC, there is also a way to disable it completely on a single socket through setsockopt() (I don't remember which option, however), but then you might occasionally send unwanted RST segments to peers that lose packets during connection shutdown.
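
The per-socket option alluded to above is, I believe, SO_LINGER with a zero timeout (my assumption, not stated in the answer): closing the socket then aborts the connection with a RST instead of a FIN, so no TIME_WAIT is entered -- with exactly the drawbacks described above. A rough sketch:

```cpp
// Sketch: abortive close via SO_LINGER with a zero timeout (POSIX sockets;
// on Winsock the struct is LINGER and the call is closesocket(), otherwise
// the same). close() then sends RST instead of FIN, so the socket skips
// TIME_WAIT entirely -- use with care for the reasons given above.
#include <sys/socket.h>
#include <unistd.h>

void abortive_close(int fd) {
    linger lg{};
    lg.l_onoff  = 1;   // enable the linger behaviour
    lg.l_linger = 0;   // ...with a zero timeout: close() aborts with RST
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(fd);         // connection is reset; no TIME_WAIT on this side
}
```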

As to why you see them only on one side of the connection: it is the side that requested to close the connection first. That side sends the first FIN, receives the FIN+ACK, and sends the last ACK. If that last ACK is lost, it will receive the FIN+ACK again, and should resend the ACK, not a RST. The other side, however, knows for sure that the connection is completely finished when the last ACK arrives, and then there is no need to wait for anything else on that socket: if anything arrives at that host with the same pair of address+TCP-port endpoints as the just-closed socket, then it should either be a new connection request (in which case a new connection might be opened), or it is a TCP state-machine violation (and must be answered with RST, or maybe some ICMP prohibited message).

Paulo1205
  • TIME_WAIT is *definitely* on the side that closed first. TIME_WAIT is motivated by a lot more than just a missing ACK. – user207421 Jan 26 '16 at 05:30
  • The post was updated with an image; it seems the last ACK is sent. Another question: who is responsible for handling the last ACK, the `c++ code` or the `Windows operating system`? – aj3423 Jan 26 '16 at 13:21
  • @aj3423 The operating system: more precisely, the TCP implementation in the network protocol stack. Sending and receiving the last ACK doesn't prevent TIME_WAIT. NB Code formatting is for code: please use ordinary quotation marks otherwise. – user207421 Jan 27 '16 at 00:24