For example, in the C Standard (my reference is ISO/IEC 9899:2011 (C11)), §3.6 states:
3.6
1 byte
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment
2 NOTE 1 It is possible to express the address of each individual byte of an object uniquely.
3 NOTE 2 A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
Why is that so? I thought the size of a byte was fixed absolutely in information technology at exactly 8 bits.
Why does the Standard make this seemingly crazy statement?
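As far as I understand it, the only way to find out what an implementation actually uses is to ask it: <limits.h> exposes the number of bits per byte as CHAR_BIT, which the Standard requires to be at least 8. A minimal check, assuming a hosted C11 implementation:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the implementation-defined number of bits in a byte.
           It must be at least 8; on mainstream platforms it is exactly 8. */
        printf("bits per byte: %d\n", CHAR_BIT);
        return 0;
    }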
Also:
If the byte size isn't fixed, how can we talk about, for example, a char being comprised of 8 bits and an int of 32 bits (4 bytes), assuming a 64-bit system?
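To make the relationship concrete, here is a small sketch (again assuming a hosted C11 implementation) that reports the sizes in the Standard's own terms: sizeof counts bytes, and a byte is CHAR_BIT bits, so the bit width of a type is sizeof(type) * CHAR_BIT:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; sizeof(int) is implementation-defined
           (commonly 4 on current 64-bit desktop platforms, but the Standard only
           requires int to cover at least a 16-bit range). */
        printf("sizeof(char) = %zu byte(s)\n", sizeof(char));
        printf("sizeof(int)  = %zu byte(s) = %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }

On a typical current 64-bit desktop platform this prints 1 and 4 (i.e. 32 bits), but a conforming implementation with a different CHAR_BIT or a different sizeof(int) is free to print something else.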