To correctly deal with endianness, you need to know two things: whether the data is big- or little-endian, and what size the given unit of data is.
This doesn't mean that you can't handle data of varying lengths. It does mean that you need to know what size data you're dealing with.
But this is true anyway. If you receive a series of bytes over the network (for example), you need to know how to interpret them. If you get 32 bytes, that could be text, it could be eight 32-bit integers, it could be four 64-bit integers, or whatever.
If you are expecting a 32-bit integer, then you need to handle endianness 32 bits (4 bytes) at a time. If you are expecting a 64-bit integer, then 8 bytes at a time. This has nothing to do with what a "word" is defined to be for your CPU architecture, or your language, or your managed run-time. It has everything to do with the protocol your code is dealing with.
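For example, something like this (a minimal sketch, not production code; the `BigEndian` class and the `ReadInt32`/`ReadInt64` helper names are just for illustration) reads a big-endian field of either width from a received buffer, reversing exactly as many bytes as the protocol says the field occupies:

```csharp
using System;

static class BigEndian
{
    public static int ReadInt32(byte[] buffer, int offset)
    {
        byte[] field = new byte[4];
        Array.Copy(buffer, offset, field, 0, 4);
        // Reverse only if the host is little-endian; a 32-bit field
        // is always reversed 4 bytes at a time.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(field);
        return BitConverter.ToInt32(field, 0);
    }

    public static long ReadInt64(byte[] buffer, int offset)
    {
        byte[] field = new byte[8];
        Array.Copy(buffer, offset, field, 0, 8);
        // A 64-bit field is reversed 8 bytes at a time.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(field);
        return BitConverter.ToInt64(field, 0);
    }
}
```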
Even within a given protocol, different pieces of data may be different sizes. You might have a mix of `short`, `int`, and `long`, and you need to accommodate that.
It's exactly the same reason that e.g. `BitConverter` or `BinaryReader` consumes or generates a different number of bytes depending on what type of data you are handling. It's just that instead of consuming or generating that number of bytes, you reverse that number of bytes (or don't, if the platform endianness already matches the protocol's).
In your example, if you passed a `long` to `BitConverter.GetBytes()`, then the overload the compiler selects would be the method that takes a `long` instead of an `int`, and it would return eight bytes instead of four. And reversing the entire eight bytes would be the right thing to do.
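To illustrate (a sketch with arbitrary values): the `long` overload yields eight bytes and all eight are what you reverse; the `int` overload yields four, and you reverse four.

```csharp
using System;

class Demo
{
    static void Main()
    {
        int  i = 0x01020304;
        long l = 0x0102030405060708L;

        byte[] four  = BitConverter.GetBytes(i); // 4 bytes; GetBytes(int) overload
        byte[] eight = BitConverter.GetBytes(l); // 8 bytes; GetBytes(long) overload

        // To emit big-endian output on a little-endian host,
        // reverse the whole field, whatever its size.
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(four);
            Array.Reverse(eight);
        }

        Console.WriteLine(BitConverter.ToString(four));  // 01-02-03-04
        Console.WriteLine(BitConverter.ToString(eight)); // 01-02-03-04-05-06-07-08
    }
}
```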