I recently saw this post about endianness macros in C and I can't really wrap my head around the first answer.
Code supporting arbitrary byte orders, ready to be put into a file called order32.h:
#ifndef ORDER32_H
#define ORDER32_H
#include <limits.h>
#include <stdint.h>
#if CHAR_BIT != 8
#error "unsupported char size"
#endif
enum
{
    O32_LITTLE_ENDIAN = 0x03020100ul,
    O32_BIG_ENDIAN = 0x00010203ul,
    O32_PDP_ENDIAN = 0x01000302ul
};
static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
    { { 0, 1, 2, 3 } };
#define O32_HOST_ORDER (o32_host_order.value)
#endif
You would check for little endian systems via
O32_HOST_ORDER == O32_LITTLE_ENDIAN
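Just to make sure I am reading it right, this is how I would actually use it (my own sketch, assuming the header above is saved as order32.h as described; it is not part of the original answer):

#include <stdio.h>
#include "order32.h"

int main(void)
{
    /* compare the detected host order against the three examples */
    if (O32_HOST_ORDER == O32_LITTLE_ENDIAN)
        puts("little-endian host");
    else if (O32_HOST_ORDER == O32_BIG_ENDIAN)
        puts("big-endian host");
    else if (O32_HOST_ORDER == O32_PDP_ENDIAN)
        puts("PDP-endian host");
    else
        puts("unknown byte order");
    return 0;
}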
I do understand endianness in general. This is how I understand the code:
- Create examples of little, middle and big endianness (the three enum constants).
- Compare the test case against these examples and decide which byte order the host machine uses (see my memcpy sketch after this list).
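To check my understanding, here is my own sketch of the same trick using memcpy instead of a union (purely illustrative; the expected values are the constants from the answer):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char bytes[4] = { 0, 1, 2, 3 };
    uint32_t value;
    /* reinterpret the four bytes as one 32-bit integer */
    memcpy(&value, bytes, sizeof value);
    /* little-endian host: value == 0x03020100 (O32_LITTLE_ENDIAN)
       big-endian host:    value == 0x00010203 (O32_BIG_ENDIAN)
       PDP-endian host:    value == 0x01000302 (O32_PDP_ENDIAN) */
    printf("host order value = 0x%08" PRIx32 "\n", value);
    return 0;
}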
What I don't understand are the following aspects:
- Why is a union needed to store the test case? Isn't uint32_t guaranteed to be able to hold 32 bits/4 bytes as needed? And what does the assignment { { 0, 1, 2, 3 } } mean? It assigns the value to the union, but why the strange markup with two braces?
- Why the check for CHAR_BIT? One comment mentions that it would be more useful to check UINT8_MAX. Why is char even used here when it's not guaranteed to be 8 bits wide? Why not just use uint8_t (see my sketch at the end of this post)? I found this link to Google-Devs GitHub; they don't rely on this check... Could someone please elaborate?
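For reference, this is the uint8_t variant I have in mind for the second point. It is only a sketch of my idea, I don't know whether it is actually correct or portable, and o32_host_order_u8 is just my own name for it:

#include <stdint.h>

static const union { uint8_t bytes[4]; uint32_t value; } o32_host_order_u8 =
    { { 0, 1, 2, 3 } }; /* outer braces for the union, inner braces for the bytes array? */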