What the C standard says about the order of bytes in memory is in C 2018 6.2.6.1 2:
> Except for bit-fields, objects are composed of contiguous sequences of one or more bytes, the number, order, and encoding of which are either explicitly specified or implementation-defined.
This does not say there is any relationship between the order of bytes in a `short` and the order of bytes in an `int`, or in a `long`, a `long long`, a `double`, or other types. It does not say the order is constrained to only certain permissible orders, such as that one of the four orders you list must be used. There are 4! = 24 ways to order four bytes, and it would be permissible, according to the C standard, for a C implementation to use any one of those 24 for a four-byte `int`, and for the same C implementation to use any one of those 24, the same or different, for a four-byte `long`.
To fully test what orders a C implementation is using, you would need to test each byte in each type of object bigger than one byte.
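As a sketch of such a test for the integer types (the helper and output format here are my own, and a complete test would also cover `double` and any other multi-byte types), each object below is filled so that its most significant byte holds 01, the next 02, and so on, and its bytes are then printed from lowest address to highest:

```c
#include <stdio.h>

/* Print the bytes of an object from lowest address to highest. */
static void dump(const char *label, const void *object, size_t size)
{
    const unsigned char *p = object;
    printf("%-18s:", label);
    for (size_t i = 0; i < size; ++i)
        printf(" %02x", p[i]);
    putchar('\n');
}

/* Fill an integer with bytes 01, 02, ... in order of decreasing
   significance, then show how those bytes are placed in memory.
   Unsigned types are used to keep the shifts well defined. */
#define TEST(type)                                          \
    do {                                                    \
        type object = 0;                                    \
        for (size_t k = 0; k < sizeof object; ++k)          \
            object = (type) (object << 8 | (type) (k + 1)); \
        dump(#type, &object, sizeof object);                \
    } while (0)

int main(void)
{
    TEST(unsigned short);
    TEST(unsigned int);
    TEST(unsigned long);
    TEST(unsigned long long);
}
```

On a common little-endian implementation, each line shows the bytes in increasing order of significance; for example, `unsigned int` prints `04 03 02 01`.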
In most C implementations, it suffices to assume bytes are in big-endian order (most significant byte first, then bytes in order of decreasing significance) or little-endian order (the reverse). In some C implementations, there may be a hybrid order due to the history of the particular implementation. For example, its two-byte objects might use one byte order because of the hardware the implementation originally ran on, while its four-byte objects were constructed in software from two-byte objects whose order was the programmer's choice.
A similar situation can arise with larger objects, such as a 64-bit `double` stored as two 32-bit parts.
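As a sketch of detecting which of these orders a four-byte unsigned integer uses (the labels are my own; the 02 01 04 03 pattern is the middle-endian order historically associated with the PDP-11, which stored 16-bit words with their bytes in little-endian order but composed 32-bit values with the most significant word first):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x01020304;   /* bytes 01, 02, 03, 04 by decreasing significance */
    unsigned char bytes[sizeof value];
    memcpy(bytes, &value, sizeof value);

    static const unsigned char big[4]    = { 0x01, 0x02, 0x03, 0x04 };
    static const unsigned char little[4] = { 0x04, 0x03, 0x02, 0x01 };
    static const unsigned char middle[4] = { 0x02, 0x01, 0x04, 0x03 };

    if (memcmp(bytes, big, sizeof bytes) == 0)
        puts("big-endian");
    else if (memcmp(bytes, little, sizeof bytes) == 0)
        puts("little-endian");
    else if (memcmp(bytes, middle, sizeof bytes) == 0)
        puts("middle-endian (PDP-11 style)");
    else
        puts("some other order");
}
```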
However, variants with other orders, such as the bytes 0, 1, 2, and 3 (denoted by significance) stored in the order 3, 0, 1, 2, would arise only in perverse C implementations that technically conform to the C standard but do not serve any practical purpose. Such possibilities can be ignored in ordinary code.
To explore all possibilities, you must also consider the order in which bits are stored within the bytes of an object. The C standard requires that “the same bits” be used for the same meaning only between corresponding signed and unsigned types, in C 2018 6.2.6.2 2:
> … Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type…
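One consequence of this guarantee can be observed directly: for a nonnegative value, the sign bit is 0 and every value bit must match the same bit in the unsigned type, so on an implementation whose `int` has no padding bits (essentially all current ones) the whole object representations compare equal. A minimal sketch, assuming no padding bits:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    int      s = 12345;    /* any nonnegative value representable in both types */
    unsigned u = 12345u;

    /* Per C 2018 6.2.6.2 2, each value bit of s matches the same bit in
       the representation of u; with a 0 sign bit and no padding bits,
       the two objects are byte-for-byte identical. */
    if (memcmp(&s, &u, sizeof s) == 0)
        puts("int and unsigned use the same bits for this value");
    else
        puts("representations differ (padding bits?)");
}
```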
Thus, a C implementation in which bits 3 and 4 of the first byte of an `int` represented 2³ and 2⁴ while the same bits of a `long` represented 2⁴ and 2³ would technically conform to the C standard. While this seems odd, the fact that the standard specifically constrains this for corresponding signed and unsigned types but not for other types suggests there were C implementations that assigned different meanings to corresponding bits in different types.