
As stated in the title, my question is: why does TCP/IP use big-endian encoding when transmitting data, and not the alternative little-endian scheme?

Neji
  • Despite the fact that it has been closed down, this page was quite helpful – Goaler444 Apr 08 '13 at 10:59
  • From [this product guide](https://www.wolfvision.com/wolf/commands_cynap_wolfvision/protocol_command.htm), under the **Big Endian** link: *Networks generally use big-endian order, and thus it is called network order when sending information over a network in a common format. The telephone network, historically and presently, uses a big-endian order; doing so allows routing while a telephone number is being composed. [...]* Presumably the early computer networks relied on the telephone networks of the day, and the rest is history... – atravers Dec 24 '20 at 02:02
  • At the time the "standard" was created, the majority of the servers were big-endian. Nowadays it is the opposite, but we cannot change the TCP/IP protocol due to backwards compatibility. New protocols can use little-endian, though. – Bernardo Ramos Apr 13 '21 at 16:57
  • ...but if you are thinking of using little-endian in your shiny-new network protocol, [this should interest you](https://www.cnn.com/TECH/space/9909/30/mars.metric/) - humans switching between fundamentally-different formats or systems is a fraught exercise... – atravers Jul 18 '21 at 01:38

1 Answer


RFC 1700 stated it must be so (and defined network byte order as big-endian):

> The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
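
In practice, that "network byte order" is what the standard socket byte-order helpers produce. A minimal C sketch, assuming a POSIX system with `<arpa/inet.h>`:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(): host-to-network (big-endian) conversion */

int main(void) {
    uint32_t host_value = 0x0A0B0C0D;        /* value in the host's native byte order */
    uint32_t net_value  = htonl(host_value); /* same value in network byte order      */

    /* Inspect the bytes as they would appear on the wire: the most significant
       octet comes first, regardless of the CPU's native endianness.            */
    const unsigned char *p = (const unsigned char *)&net_value;
    printf("wire order: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    /* Prints "wire order: 0A 0B 0C 0D" on both big- and little-endian hosts. */
    return 0;
}
```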

The reference they make is to:

> On Holy Wars and a Plea for Peace
> Cohen, D.
> Computer

The abstract can be found at IEN-137 or on this IEEE page.


Summary:

> Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.

It concludes that either the big-endian or the little-endian scheme could have been chosen. Neither is inherently better or worse, and either can be used in place of the other as long as it is applied consistently across the system/protocol.
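
As a sketch of that consistency point (the helper names below are hypothetical), serializing by value with shifts produces whichever wire order was agreed upon, independent of the host's endianness; choosing the other convention just means reversing the byte indices:

```c
#include <stdint.h>

/* Hypothetical helper: write a 32-bit value into buf in big-endian
   ("network") order, independent of the host's native endianness.  */
static void put_u32_be(uint8_t *buf, uint32_t v) {
    buf[0] = (uint8_t)(v >> 24);  /* most significant octet first */
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)(v);
}

/* The little-endian version of the same agreement: only the indices change. */
static void put_u32_le(uint8_t *buf, uint32_t v) {
    buf[0] = (uint8_t)(v);        /* least significant octet first */
    buf[1] = (uint8_t)(v >> 8);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 24);
}
```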

Anirudh Ramanathan
  • RFC 3232 appears to say "RFC1700 is obsolete" without giving any replacement – M.M Apr 27 '16 at 02:43
  • @Anirudh, This "answer" is avoiding the question. The question is asking for the underlying reason why big-endian is chosen instead of the alternative([s](https://en.wikipedia.org/wiki/Endianness#Middle-endian)). Re "*Which way is chosen does not make too much difference*", this is false because in reality it matters due to the simple fact that performance matters (and such a standard is entrenched in the very bottom layers of network communications). – Pacerier Oct 02 '16 at 07:38
  • @Pacerier There wouldn't be a difference in terms of performance, which is what the linked paper talks about in detail. – Anirudh Ramanathan Oct 04 '16 at 05:44
  • There is a significant difference. As a lot of network protocol parsers are written in C or a derivative of it for performance reasons, having little-endian encoding on an Intel/AMD/little-endian computer means a simple cast of a "void *" to a "struct *". If a conversion is needed, "htonl, htons, ntohl, ntohs" need to be called on each field, which inherently creates a copy (see the sketch below). – Pierre-Luc Bertrand Feb 17 '23 at 19:04
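
To make that trade-off concrete, here is a rough C sketch of the conversion path the last comment describes (the header layout and field names are hypothetical, not taken from any real protocol):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohs()/ntohl(): network-to-host conversions, POSIX */

/* Hypothetical wire format: a 16-bit length followed by a 32-bit sequence
   number, both transmitted in network (big-endian) byte order.            */
void parse_header(const uint8_t *buf, uint16_t *length, uint32_t *seq) {
    uint16_t len_n;
    uint32_t seq_n;
    memcpy(&len_n, buf,     sizeof len_n);  /* memcpy sidesteps alignment issues */
    memcpy(&seq_n, buf + 2, sizeof seq_n);
    *length = ntohs(len_n);                 /* byte swap on little-endian hosts  */
    *seq    = ntohl(seq_n);                 /* no-op on big-endian hosts         */
}
```

On modern optimizing compilers such conversions typically reduce to a single byte-swap instruction per field, which is relevant to the performance point debated above.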