
I was wondering about the order in which bytes are sent and received by a TCP socket.

I have implemented a socket, it's up and working, so that's good. I also have something called "a message" - it's a byte array that contains a string (serialized to bytes) and two integers (converted to bytes). It has to be like that - project specifications :/
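For concreteness, a message like the one described (a string plus two integers, flattened into one byte array) might be packed like this. The layout and names here are my own illustration, not the project's actual specification:

```csharp
using System;
using System.Text;

class MessageExample
{
    // Hypothetical layout: [UTF-8 string bytes][int a][int b].
    // A real protocol would also need a length prefix so the receiver
    // knows where the string ends, but this shows the basic packing.
    static byte[] BuildMessage(string text, int a, int b)
    {
        byte[] textBytes = Encoding.UTF8.GetBytes(text);
        byte[] message = new byte[textBytes.Length + 2 * sizeof(int)];
        Buffer.BlockCopy(textBytes, 0, message, 0, textBytes.Length);
        Buffer.BlockCopy(BitConverter.GetBytes(a), 0,
                         message, textBytes.Length, sizeof(int));
        Buffer.BlockCopy(BitConverter.GetBytes(b), 0,
                         message, textBytes.Length + sizeof(int), sizeof(int));
        return message;
    }

    static void Main()
    {
        byte[] msg = BuildMessage("hi", 1, 2);
        Console.WriteLine(msg.Length); // 2 string bytes + 2*4 int bytes = 10
    }
}
```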

Anyway, I was wondering how this works at the byte level: in a byte array the bytes have an order - 0, 1, 2, ..., Length-1 - and they sit in memory in that order.

How are they sent? Is the last one the first to be sent, or the first one? Receiving, I think, is quite easy - the first byte to arrive goes into the first free place in the buffer.

I think a little image I made nicely shows what I mean.

[Image: sending over tcp]

  • Why do you think that the order might be reversed? You must have something in mind. – usr May 15 '15 at 13:07
  • Bear in mind that if you want to deal with "messages", it's up to you to do framing on top of TCP - the model in TCP is just two streams of bytes (one you're sending and one you're receiving). Calls to `Send` at one end are not matched one-one with calls to `Receive` at the other end. – Damien_The_Unbeliever May 15 '15 at 13:07
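The framing that Damien_The_Unbeliever describes can be sketched with a minimal length-prefix scheme. This is one common approach, not the only one; the class and method names here are illustrative, and it assumes you have a connected `Stream` (e.g. a `NetworkStream`):

```csharp
using System;
using System.IO;
using System.Net;

static class Framing
{
    // Write one framed message: a 4-byte big-endian length, then the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(
            IPAddress.HostToNetworkOrder(payload.Length));
        stream.Write(lengthPrefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Read back exactly one framed message, no matter how the bytes
    // were split across TCP segments / individual Read calls.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] lengthPrefix = ReadExactly(stream, 4);
        int length = IPAddress.NetworkToHostOrder(
            BitConverter.ToInt32(lengthPrefix, 0));
        return ReadExactly(stream, length);
    }

    // Loop until 'count' bytes arrive: a single Read may return fewer.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }

    static void Main()
    {
        // Round-trip through a MemoryStream to simulate the wire:
        var ms = new MemoryStream();
        WriteMessage(ms, new byte[] { 10, 20, 30 });
        ms.Position = 0;
        byte[] back = ReadMessage(ms);
        Console.WriteLine(back.Length); // 3
    }
}
```

The `ReadExactly` loop is the important part: because TCP is a byte stream, a single `Read`/`Receive` call can return any number of bytes from 1 up to the requested count.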

1 Answer


They are sent in the same order they are present in memory. Doing otherwise would be more complex... What would you do with a continuous stream of bytes? Wait until the last one has been sent and then reverse them all? Or should this inversion work "packet by packet", so that each block of 2k bytes (or whatever the size of the TCP packets is) is internally reversed while the order of the packets stays "correct"?

> Receiving, I think, is quite easy - first byte to appear gets on first free place in buffer.

Why on earth should the sender reverse the bytes but not the receiver? If you build a symmetric system, either both do an action or neither does!

Note that the real problem is normally one of endianness. The memory layout of an int on your computer could be different from the layout of an int on another computer, so one of the two computers may have to reverse the 4 bytes of the int. But endianness is something that is resolved primitive type by primitive type. Many internet protocols are, for historical reasons, big-endian, while Intel CPUs are little-endian. Even the internal fields of TCP are big-endian (see Big endian or Little endian on net?), but there we are speaking of the fields of TCP itself, not of the data moved by the TCP protocol.
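A quick way to see this in C# is to compare a value's native byte layout with its network-order layout. This is a sketch; the printed byte patterns in the comments assume a little-endian machine such as x86:

```csharp
using System;
using System.Net;

class EndiannessDemo
{
    static void Main()
    {
        int value = 1;

        // On a little-endian machine the low-order byte comes first:
        byte[] native = BitConverter.GetBytes(value);
        Console.WriteLine(BitConverter.ToString(native)); // 01-00-00-00 on x86

        // Network byte order is big-endian, so the bytes come out reversed:
        byte[] network = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(value));
        Console.WriteLine(BitConverter.ToString(network)); // 00-00-00-01 on x86

        // Converting back to host order recovers the original value:
        int roundTrip = IPAddress.NetworkToHostOrder(
            BitConverter.ToInt32(network, 0));
        Console.WriteLine(roundTrip == value); // True
    }
}
```

On a big-endian host both conversions are no-ops, which is exactly why writing them unconditionally makes the code portable.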

xanatos
  • Since C# (when converting int to a byte[] using the BitConverter class) uses little-endian, you can convert them using [HostToNetworkOrder](https://msdn.microsoft.com/en-us/library/system.net.ipaddress.hosttonetworkorder.aspx) and then back again with [NetworkToHostOrder](https://msdn.microsoft.com/en-us/library/system.net.ipaddress.networktohostorder.aspx) when sending them over a network. – Patrick May 15 '15 at 15:13
  • @Patrick .NET for `BitConverter` seems to use "local" endianness... Both [referencesource](http://referencesource.microsoft.com/#mscorlib/system/bitconverter.cs,8640d8adfffb155b) and the latest Mono version that used [Mono code](https://github.com/mono/mono/blob/mono-3.12.0-branch/mcs/class/corlib/System/BitConverter.cs) seem so. Now, sadly I don't have an XBox360 (it is Big Endian and supports .NET) to check this... Still the solution is correct... [HostToNetworkOrder](http://referencesource.microsoft.com/#System/net/System/Net/IPAddress.cs,09e851fed446e0f5) on ReferenceSource is ok – xanatos May 15 '15 at 16:33
  • Yes, you are correct. I forgot to mention "C# on Windows". – Patrick May 16 '15 at 15:36