In my personal C++ library, I handle data shared between systems of different endianness (little or big) by working with the individual bytes. For example, I decode a 16-bit unsigned integer with the following function, which takes a pointer to the beginning of the uint16 data as its parameter:
constexpr uint16_t decode16(const uint8_t* x) {
    // x[0] holds the most significant byte, x[1] the least significant (big-endian stream)
    return uint16_t(x[1] + x[0] * 0x100);
}
As it is, this function doesn't work for a 16-bit signed integer's byte data, so 0xFF is decoded into -257. Can somebody please teach me how to write a similar function that works for signed integers? 0xFF should be decoded into -1.
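To make the expected behaviour concrete (I'm assuming here that the stream carries the two bytes 0xFF 0xFF, i.e. the big-endian encoding of the 16-bit value -1):

const uint8_t raw[2] = {0xFF, 0xFF};  // big-endian byte stream for the 16-bit value -1
uint16_t u = decode16(raw);           // u == 0xFFFF (65535): the unsigned decode already works
// what I want is a signed counterpart that turns these same two bytes into int16_t(-1)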
Edit: This is my temporary solution. It is a workaround that only works on a little-endian system and won't be portable to a big-endian one. What I'm looking for is how to write it so it works on both.
int16_t decode16(const int8_t* x) {
    int8_t temp8[2];
    temp8[0] = x[1];  // reverse the byte order of the incoming big-endian pair
    temp8[1] = x[0];
    int16_t temp16 = *(uint16_t *)&temp8[0];  // reinterpret the swapped bytes as a 16-bit value
    return temp16;
}
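For reference, one idea I've been toying with, though I'm not sure it is correct or idiomatic (decode16s is just a placeholder name), is to reuse the arithmetic style of the unsigned decoder and fold values above 0x7FFF back into the negative range by hand:

constexpr int16_t decode16s(const uint8_t* x) {
    // decode as unsigned first, then map 0x8000..0xFFFF back into the negative range
    return x[0] < 0x80
        ? int16_t(decode16(x))                      // 0x0000..0x7FFF: fits as-is
        : int16_t(int32_t(decode16(x)) - 0x10000);  // 0x8000..0xFFFF: two's complement
}

With this, the bytes 0xFF 0xFF would come out as -1, and it shouldn't depend on the host's byte order because everything is done arithmetically rather than by reinterpreting memory. Is this the right approach, or is there a more standard way to do it?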