Your understanding of endianness appears to be correct.
I would like to additionally point out the implicit, conventional nature of endianness and its role in interpreting a byte sequence as some intended value.
0x12345678 in big endian is 0x12 0x34 0x56 0x78, and 0x78 0x56 0x34 0x12 in little endian.
Interestingly, you did not explicitly state what these 0x… entities above are supposed to mean. Most programmers who are familiar with a C-style language are likely to interpret 0x12345678 as a numeric value presented in hexadecimal form, and both 0x12 0x34 0x56 0x78 and 0x78 0x56 0x34 0x12 as byte sequences (where each byte is presented in hexadecimal form, and the left-most byte is located at the lowest memory address). And that is probably exactly what you meant.
Perhaps without even thinking, you have relied on a well-known convention (i.e. the assumption that your target audience will apply the same common knowledge as you would) to convey the meaning of these 0x… entities.
Endianness is very similar to this: a rule that defines, for a given computer architecture, data transmission protocol, file format, etc., how to convert between a value and its representation as a byte sequence. Endianness is usually implied: just as you did not have to explicitly tell us what you meant by 0x12345678, it is usually not necessary to accompany each byte sequence such as 0x12 0x34 0x56 0x78 with explicit instructions for how to convert it back to a multi-byte value, because that knowledge (the endianness) is built into, or defined by, the specific computer architecture, file format, data transmission protocol, etc.
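To make that conversion rule explicit for once, here is a minimal C sketch (the function names are my own, purely for illustration) that turns the value 0x12345678 into a byte sequence in both orders and decodes one of them back:

#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value as 4 bytes, most significant byte first (big endian). */
static void store_be32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)(v);
}

/* Write a 32-bit value as 4 bytes, least significant byte first (little endian). */
static void store_le32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v);
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v >> 16);
    out[3] = (uint8_t)(v >> 24);
}

/* Read 4 bytes back into a value; the function you pick *is* the endianness rule. */
static uint32_t load_be32(const uint8_t in[4]) {
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
         | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}

int main(void) {
    uint8_t be[4], le[4];
    store_be32(be, 0x12345678u);   /* 12 34 56 78 */
    store_le32(le, 0x12345678u);   /* 78 56 34 12 */
    printf("big endian:    %02X %02X %02X %02X\n", be[0], be[1], be[2], be[3]);
    printf("little endian: %02X %02X %02X %02X\n", le[0], le[1], le[2], le[3]);
    printf("decoded back:  0x%08X\n", load_be32(be));
    return 0;
}

Note that this code itself does not depend on the byte order of the machine it runs on; the shifts operate on values, and the order of the bytes is whatever the format or protocol prescribes.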
As to when endianness is necessary: Basically for all data types whose values don't fit in a single byte. That's because computer memory is conceptually a linear array of slots, each of which has a capacity of 8 bits (an octet, or byte). Values of data types whose representation requires more than 8 bits must therefore be spread out over several slots; and that's where the importance of the byte order comes in.
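You can observe that spreading directly by looking at the individual one-byte slots a 32-bit value occupies; as a side effect, this sketch also reveals whether the machine it runs on is little or big endian:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t value = 0x12345678u;
    /* Viewing the object through an unsigned char pointer exposes the
       individual memory slots (bytes) in order of increasing address. */
    const unsigned char *p = (const unsigned char *)&value;

    printf("bytes at increasing addresses: %02X %02X %02X %02X\n",
           p[0], p[1], p[2], p[3]);

    if (p[0] == 0x78)
        puts("least significant byte stored first: little endian");
    else if (p[0] == 0x12)
        puts("most significant byte stored first: big endian");
    return 0;
}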
P.S.: Studying the Unicode character encodings UTF-16 and UTF-8 helped me build a deeper understanding of endianness.
While both encodings are for the exact same kind of data, endianness plays a role in UTF-16, but not in UTF-8. How can that be?
A UTF-16 stream typically starts with a byte order mark (BOM), while UTF-8 has no need for one. Why?
Once you understand the reasons, chances are you'll have a very good understanding of endianness issues.
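As a small teaser for that exercise, here is how a single character, U+00E9 ('é'), ends up as bytes in each encoding (the byte values are written out by hand here, purely to show the contrast):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint8_t utf16_be[] = { 0x00, 0xE9 };  /* UTF-16, big endian    */
    const uint8_t utf16_le[] = { 0xE9, 0x00 };  /* UTF-16, little endian */
    const uint8_t utf8[]     = { 0xC3, 0xA9 };  /* UTF-8: one sequence, no order to choose */

    printf("UTF-16BE: %02X %02X\n", utf16_be[0], utf16_be[1]);
    printf("UTF-16LE: %02X %02X\n", utf16_le[0], utf16_le[1]);
    printf("UTF-8:    %02X %02X\n", utf8[0], utf8[1]);
    return 0;
}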