
I'm trying to encode a TCP header myself, but I can't figure out the right order of bits/octets in it. This is what RFC 793 says:

0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
...

This means that Source Port should take the first two octets and the lowest bit should be in the first octet. This suggests to me that in order to encode source port 180, I should start my TCP header with these two bytes:

B4 00 ...

However, all examples I can find tell me to do it the other way around:

00 B4 ...

Why?

– yegor256

1 Answer


This means that Source Port should take the first two octets

Correct.

and the lowest bit should be in the first octet.

Incorrect. It doesn't mean that; the diagram says nothing about bit significance within an octet.

All multi-byte integers in all IP headers are represented in network byte order, which is big-endian. This is specified in RFC 1700.
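So the most significant byte of the 16-bit source port goes into the first octet, which is why 180 (0x00B4) is encoded as 00 B4 on the wire. Here is a minimal sketch, assuming a POSIX C toolchain where htons() from <arpa/inet.h> converts a 16-bit value from host byte order to network byte order:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htons(): host to network (big-endian) byte order */

int main(void) {
    uint16_t source_port = 180;   /* 0x00B4 */
    uint8_t header[2];

    /* Convert to network byte order and copy the two octets into the header. */
    uint16_t be_port = htons(source_port);
    memcpy(header, &be_port, sizeof be_port);

    /* Prints "00 B4": most significant byte first. */
    printf("%02X %02X\n", header[0], header[1]);
    return 0;
}

Alternatively, shifting by hand (header[0] = port >> 8; header[1] = port & 0xFF;) produces the same big-endian layout regardless of the host machine's endianness.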

– user207421
  • looks like this question is relevant: http://stackoverflow.com/questions/13514614/why-is-network-byte-order-defined-to-be-big-endian – yegor256 Mar 02 '15 at 21:02