Java is big-endian by default; the network stack is big-endian; Intel/AMD x86 (basically all of our desktop computers) is little-endian, and the ARM CPUs in most Android and iOS devices run little-endian in practice as well.
Given all of that, if I am allocating a direct ByteBuffer for different uses, is it a good idea to always try to match the endianness of the native side of the interaction?
More specifically:
- Network buffer: leave it big-endian (Java's default).
- File buffer (on x86): little-endian.
- OpenGL/native process buffer: little-endian.
and so on...
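To make the question concrete, here is a minimal sketch of what I mean by "matching" (buffer sizes and names are just placeholders; `ByteBuffer.order(ByteOrder)` and `ByteOrder.nativeOrder()` are the standard `java.nio` calls):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferOrders {
    public static void main(String[] args) {
        // Network buffer: Java's default order is already big-endian
        // (network byte order), so nothing needs to be set.
        ByteBuffer net = ByteBuffer.allocateDirect(1024);

        // File / OpenGL / native buffer: explicitly match the host
        // CPU's byte order instead of keeping the big-endian default.
        ByteBuffer nat = ByteBuffer.allocateDirect(1024)
                                   .order(ByteOrder.nativeOrder());

        System.out.println("network: " + net.order()); // BIG_ENDIAN
        System.out.println("native:  " + nat.order()); // host-dependent
    }
}
```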
I am asking because I have never thought about the endianness of my ByteBuffers, but after seeing some other questions on SO about the performance impact it can have, it seems worth doing, or at least something I should be more aware of when using ByteBuffers.
Or is there a downside to worrying about endianness that I am missing and should be aware of?