I have this code in Java:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.ByteBuffer;

Socket socket = new Socket("127.0.0.1", 10);
OutputStream os = socket.getOutputStream();
int data = 50000;
// ByteBuffer's default byte order is big endian, so this writes
// the int as 4 bytes in network order.
os.write(ByteBuffer.allocate(4).putInt(data).array());
And in C#:

byte[] ba = readint(networkStream); // my helper that reads the 4 bytes (sketched below)
networkStream.Flush();
if (BitConverter.IsLittleEndian) Array.Reverse(ba);
int i = BitConverter.ToInt32(ba, 0); // 50000
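(readint isn't shown above; it's just a helper that reads exactly 4 bytes from the stream. A minimal sketch of what I mean, assuming the sender always writes exactly 4 bytes per int:)

using System.IO;

// Roughly what readint does: block until exactly 4 bytes have been read.
static byte[] readint(Stream stream)
{
    byte[] buffer = new byte[4];
    int total = 0;
    while (total < 4)
    {
        int n = stream.Read(buffer, total, 4 - total);
        if (n <= 0) throw new EndOfStreamException("stream ended before 4 bytes arrived");
        total += n;
    }
    return buffer;
}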
It's all working fine, but:
I saw an image comparing byte orders (not reproduced here), and the part that interested me was that "Network Order" was listed under Big Endian.
I read on Wikipedia that:
Many IETF RFCs use the term network order, meaning the order of transmission for bits and bytes over the wire in network protocols. Among others, the historic RFC 1700 (also known as Internet standard STD 2) has defined its network order to be big endian, though not all protocols do.
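(Incidentally, .NET has helpers for exactly this "network order" convention: IPAddress.HostToNetworkOrder and IPAddress.NetworkToHostOrder. A small round-trip sketch to illustrate what I mean:)

using System;
using System.Net;

int value = 50000;
// HostToNetworkOrder swaps the bytes only when the host is little endian,
// so the result is always big endian ("network order").
int network = IPAddress.HostToNetworkOrder(value);
int back = IPAddress.NetworkToHostOrder(network); // convert back to host order
Console.WriteLine(back); // prints 50000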
Question
If Java uses big endian, and TCP uses "network order" (also big endian), why did I have to check for endianness in my C# code?
I mean, I could just do:

Array.Reverse(ba);

without checking if (BitConverter.IsLittleEndian) first. Am I right?
If so, what about the case where some unknown source sends me data and I don't know whether it was sent big or little endian? It would have to send me a first byte to indicate the order, right? But that first byte is also subject to endianness..... Where is my misunderstanding?
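(For what it's worth, I know I could sidestep BitConverter and the host-order check entirely by assembling the int from the bytes myself. A minimal sketch, assuming the sender always writes big endian / network order:)

byte[] ba = readint(networkStream);
// Interpret the 4 bytes as big endian, regardless of the local machine's
// native byte order; no Array.Reverse needed.
int i = (ba[0] << 24) | (ba[1] << 16) | (ba[2] << 8) | ba[3];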