4

This is code I'm using to test a webserver on an embedded product that hasn't been behaving well when an HTTP request comes in fragmented across multiple TCP packets:

/* This is all within a loop that cycles size_chunk up to the size of the whole 
 * test request, in order to test all possible fragment sizes. */
TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);    
client_sensor.Client.NoDelay = true;    /* SHOULD force the TCP socket to send the packets in exactly the chunks we tell it to, rather than buffering the output. */
/* I have also tried just "client_sensor.NoDelay = true", with no luck. */
client_sensor.Client.SendBufferSize = size_chunk; /* Added in a desperate attempt to fix the problem before posting my shameful ignorance on stackoverflow. */
for (int j = 0; j < TEST_HEADERS.Length; j += size_chunk)
{
    String request_fragment = TEST_HEADERS.Substring(j, (TEST_HEADERS.Length < j + size_chunk) ? (TEST_HEADERS.Length - j) : size_chunk);
    client_sensor.Client.Send(Encoding.ASCII.GetBytes(request_fragment));     
    client_sensor.GetStream().Flush();   
}
/* Test stuff goes here, check that the embedded web server responded correctly, etc.. */

Looking at Wireshark, I see only one TCP packet go out, containing the entire test header, rather than the roughly (header length / chunk size) packets I expect. I have used NoDelay to turn off the Nagle algorithm before, and it usually works just as I expect it to. The online documentation for NoDelay at http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.nodelay%28v=vs.90%29.aspx definitely states "Sends data immediately upon calling NetworkStream.Write" in its associated code sample, so I think I've been using it correctly all this time.

This happens whether or not I step through the code. Is the .NET runtime optimizing away my packet fragmentation?

I'm running x64 Windows 7, .NET Framework 3.5, Visual Studio 2010.

Sam Skuce
  • I wonder if perhaps WireShark is combining the packets after they are on the wire. It doesn't seem at all applicable in this case, but it apparently does have some [reassembly](http://wiki.wireshark.org/TCP_Reassembly) capabilities. This seems unlikely, and I'm probably just wasting your time by suggesting it as a possibility. – Mark Wilkins Jan 07 '12 at 00:05
  • I would advise against using derogatory terms such as "nanny-state" until you're sure that you're using the system correctly. The first rule of getting on high-horses is, "make sure yours isn't just a donkey." – Kennet Belenky Jan 08 '12 at 04:24
  • @KennetBelenky, noted, I've removed the offending text. I was just making a joke, though =) – Sam Skuce Jan 08 '12 at 06:32
  • @SamSkuce Thanks and NP. I just bristle a little bit because people often don't realize how many thousands of lines of annoying boilerplate C++ they get to avoid in exchange for the occasional weirdness of .Net. That said, I've banged my head against .Net many times, and every single time it's turned out that I was the one who hadn't grasped the whole picture. – Kennet Belenky Jan 08 '12 at 16:42

4 Answers

2

TcpClient.NoDelay does not mean that blocks of bytes will not be aggregated into a single packet. It means that blocks of bytes will not be delayed in order to aggregate into a single packet.

If you want to force a packet boundary, use Stream.Flush.
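For concreteness, here is a minimal sketch of that suggestion, writing through the NetworkStream instead of Socket.Send. The host name, header string, and chunk size are placeholders, and (as the comments below point out) NetworkStream.Flush is documented as having no effect, so this may not actually force a boundary on the wire.

/* Hypothetical sketch of the suggestion above, not the asker's actual test code. */
using System;
using System.Net.Sockets;
using System.Text;

static void SendInFragments(string host, string headers, int chunkSize)
{
    TcpClient client = new TcpClient(host, 80);
    client.NoDelay = true;      /* Disable Nagle so each write is sent without waiting to coalesce. */
    NetworkStream stream = client.GetStream();
    for (int i = 0; i < headers.Length; i += chunkSize)
    {
        int length = Math.Min(chunkSize, headers.Length - i);
        byte[] bytes = Encoding.ASCII.GetBytes(headers.Substring(i, length));
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();         /* Intended packet boundary; see the caveat in the comments below. */
    }
    client.Close();
}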

Kennet Belenky
  • No luck. The documentation for TcpClient.GetStream().Flush says it is only there "For Future Use". It's strange, because in the past, when this wasn't in a loop - e.g. writing two constant buffers through the socket on successive lines with NoDelay set to true and no delays in between - it has worked exactly as I expected. Thanks though! – Sam Skuce Jan 08 '12 at 04:39
  • Are you sure your sockets are in synchronous mode (Blocking = false)? If your sockets are in non-blocking mode, your writes might be aggregated into single packets. That will happen if two subsequent writes happen while the socket is still flushing a previous write. – Kennet Belenky Jan 08 '12 at 05:05
  • TCP is a stream protocol; that is why you cannot assume anything about "single packets". If you send 4 bytes and then 6 bytes, the peer on the other side can receive 3 bytes and then, on another call to receive, the remaining 7 bytes. – Vadym Stetsiak Jan 08 '12 at 06:06
  • @VadymStetsiak, yes that's the way you're supposed to treat TCP when you're reading it, but the documentation for the NoDelay property (now linked in question text) clearly states that setting the property to true should result in sending the bytes in separate packets exactly as you write them. – Sam Skuce Jan 08 '12 at 06:34
  • @KennetBelenky, when I set Blocking = false, it throws an exception saying "not allowed on a non-blocking socket" when I call the Send function. There's probably something else I need to do to make it work in non-blocking mode, but I think my plan now is to try to rewrite this in native C++ code come Monday. Thanks for all your help! – Sam Skuce Jan 08 '12 at 06:45
  • @SamSkuce, the NoDelay description is oversimplified. NoDelay=true turns off the Nagle algorithm, so every write operation immediately sends data over the network. However, the remote peer's TCP stack can still batch those packets together for the read operation, which renders NoDelay=true useless in your situation. – Vadym Stetsiak Jan 08 '12 at 08:20
  • @SamSkuce The packet aggregation you're seeing and the exceptions you encountered have the same root cause. In non-blocking mode, if you do a write while another one is still completing, the socket will say, "no problem, throw it on the pile and I'll send it the next time I write." In blocking mode, the socket will say, "OH NOES! PEOPLE ARE ASKING ME TO DO THINGS TOO QUICKLY! I CAN'T DEAL WITH LIFE!" Socket.Poll can tell you if the socket is in a writeable state (see the sketch after these comments). There may be other ways as well. – Kennet Belenky Jan 08 '12 at 16:49
  • @SamSkuce And btw... writing this in C++ won't give you significantly more control. This is a property of how sockets work, not how .Net uses them. – Kennet Belenky Jan 08 '12 at 16:50
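As a rough sketch of the Socket.Poll check mentioned in the comment above (the one-second timeout is arbitrary and purely illustrative):

/* Hypothetical helper: wait until the socket reports it is writeable before sending the next fragment. */
using System.Net.Sockets;
using System.Text;

static void SendWhenWritable(Socket socket, string fragment)
{
    /* Poll for up to one second (the argument is in microseconds) for write readiness. */
    if (socket.Poll(1000000, SelectMode.SelectWrite))
    {
        socket.Send(Encoding.ASCII.GetBytes(fragment));
    }
}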
2

Grr. It was my antivirus getting in the way. A recent update caused it to start interfering with the sending of HTTP requests to port 80 by buffering all output until the final "\r\n\r\n" marker was seen, regardless of how the OS was trying to handle the outbound TCP traffic. I should have checked that first, but I've been using this same antivirus program for years and never had this problem before, so I didn't even think of it. Everything works just the way it used to when I disable the antivirus.

Sam Skuce
  • Good find! For clarity, would you like to disclose the name of the antivirus software? – Crypth Nov 13 '15 at 07:53
  • @Crypth, it's Trend Micro. I haven't needed to rerun this test in the past 4 years, so I don't know if it's still doing it. – Sam Skuce Nov 17 '15 at 14:31
  • Cheers, been having some unexpected issues with HTTP requests for some customers, and we've been thinking in the lines of proxy rewriting the traffic, but obviously AV is a possible cause as well. – Crypth Nov 17 '15 at 15:04
1

The MSDN docs show setting the TcpClient.NoDelay = true, not the TcpClient.Client.NoDelay property. Did you try that?
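In code, that would be something like the following, reusing the question's NAME_MODULE constant and setting the property on the TcpClient itself as the MSDN sample does:

TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
client_sensor.NoDelay = true;   /* Set on the TcpClient, as in the MSDN sample, rather than on the underlying Socket. */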

holtavolt
  • No dice. I have used it my way successfully in the past as well. Thanks though! – Sam Skuce Jan 08 '12 at 04:38
  • A few ideas: Are you sure that the server receive buffer isn't full at the time of the send (which would cause the send to buffer until blocked)? Also - the MSDN docs show reading the NoDelay property back to test it was successfully set - try adding that to verify that the client/socket honored the request (don't know why it wouldn't). Finally, here's a related SO question on this topic: http://stackoverflow.com/questions/5523565/socket-flush-by-temporarily-enabling-nodelay – holtavolt Jan 08 '12 at 15:35
0

Your test code is just fine (I assume that you send valid HTTP). What you should check is why the TCP server is not behaving well when reading from the TCP connection. TCP is a stream protocol, which means you cannot make any assumptions about the size of the data packets unless you explicitly encode those sizes in your data protocol. For instance, you can prefix each data packet with a fixed-size (2-byte) field that contains the size of the data to be received.
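A minimal sketch of that length-prefix idea (the 2-byte big-endian framing here is purely illustrative, not an established protocol):

/* Hypothetical framing helper: write a 2-byte length prefix, then the payload itself. */
using System.Net.Sockets;

static void SendWithLengthPrefix(NetworkStream stream, byte[] payload)
{
    byte[] prefix = new byte[2];
    prefix[0] = (byte)(payload.Length >> 8);     /* High byte of the length. */
    prefix[1] = (byte)(payload.Length & 0xFF);   /* Low byte of the length. */
    stream.Write(prefix, 0, prefix.Length);
    stream.Write(payload, 0, payload.Length);
}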

Reading HTTP is usually done in several phases: read the HTTP request line, read the HTTP headers, then read the HTTP content (if applicable). The first two parts have no size specifications, but they use a special delimiter (CRLF), with an empty line marking the end of the headers.
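For example, the request line and headers can be accumulated byte by byte until that blank-line delimiter appears, no matter how the bytes were split across TCP segments. This is a simplified sketch with no error handling:

/* Hypothetical reader: collect bytes until the CRLFCRLF that ends the header section. */
using System.Net.Sockets;
using System.Text;

static string ReadHttpHeaders(NetworkStream stream)
{
    StringBuilder headers = new StringBuilder();
    int b;
    while ((b = stream.ReadByte()) != -1)
    {
        headers.Append((char)b);
        if (headers.Length >= 4 && headers.ToString(headers.Length - 4, 4) == "\r\n\r\n")
        {
            break;      /* End of the request line and headers. */
        }
    }
    return headers.ToString();
}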

Here is some info on how HTTP can be read and parsed.

Vadym Stetsiak