I work almost exclusively with embedded systems, where I rather often have to provide code that's portable between all manner of more or less exotic systems - like code that will work both on some tiny 8-bit MCU and on x86_64.
But even for me, bothering with portability to exotic obsolete DSP systems and the like is a huge waste of time. These systems barely exist in the real world - why exactly do you need portability to them? Is there any other reason than "showing off" mostly useless language lawyer knowledge of C? In my experience, 99% of all such useless portability concerns boil down to programmers "showing off", rather than an actual requirement specification.
And even if you for some strange reason do need such portability, this task doesn't make any sense to begin with, since neither char nor long is portable! If char is not 8 bits, then what makes you think long is 4 bytes? It could be 2 bytes, it could be 8 bytes, or it could be something else.
If portability is an actual concern, then you must use stdint.h. Then, if you truly must support exotic systems, you have to decide which ones. The only real-world computers I know of that actually use a different byte size are various obsolete, exotic TI DSPs from the 1990s, which use 16-bit bytes/char. Let's assume this is your intended target, which you have decided is important to support.
Let's also assume that a standard C compiler (ISO 9899) exists for that exotic target, which is highly unlikely. (More likely you'll get a poorly conforming, mostly broken legacy C90 thing... or, even more likely, those who use the target write everything in assembler.) If a standard C compiler does exist, it will not implement uint8_t, since that's not a mandatory type if the target can't support it. Only uint_least8_t and uint_fast8_t are mandatory.
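One way to check at compile time whether the exact-width type exists is to test its limit macro, since the standard only defines UINT8_MAX when uint8_t itself is provided. A minimal sketch - byte_t is a made-up name for illustration:

#include <stdint.h>

#ifdef UINT8_MAX
typedef uint8_t byte_t;        /* target has 8 bit bytes */
#else
typedef uint_least8_t byte_t;  /* always provided, at least 8 bits wide */
#endif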
Then you'd go about it like this:
#include <stdint.h>
#include <limits.h>

#if CHAR_BIT == 8
/* Store a 32 bit value as 4 octets, most significant byte first. */
static void uint32_to_uint8 (uint8_t dst[4], uint32_t u32)
{
    dst[0] = (u32 >> 24) & 0xFF;
    dst[1] = (u32 >> 16) & 0xFF;
    dst[2] = (u32 >>  8) & 0xFF;
    dst[3] = (u32 >>  0) & 0xFF;
}
#endif
// whatever other conversion functions you need:
static void uint32_to_uint16 (uint16_t dst[2], uint32_t u32){ ... }
static void uint64_to_uint16 (uint16_t dst[4], uint64_t u64){ ... }
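The bodies follow the same shift-and-mask pattern. For example, uint32_to_uint16 might look like this (a sketch, splitting the value into two 16-bit halves, most significant half first):

static void uint32_to_uint16 (uint16_t dst[2], uint32_t u32)
{
    dst[0] = (u32 >> 16) & 0xFFFFu;
    dst[1] = (u32 >>  0) & 0xFFFFu;
}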
The exotic DSP will then use the uint32_to_uint16 function. You could use the same compiler #if CHAR_BIT checks to do #define byte_to_word uint32_to_uint16 etc., as sketched below.
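Put together, such a dispatch might look something like this (note that the destination buffer type differs per target, so callers would typically hide it behind a matching typedef):

#include <limits.h>

#if CHAR_BIT == 8
    #define byte_to_word uint32_to_uint8
#elif CHAR_BIT == 16
    #define byte_to_word uint32_to_uint16
#else
    #error "no conversion function for this byte size"
#endif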
And then you should also immediately notice that endianness will be the next major portability concern. I have no idea which endianness obsolete DSPs tend to use, but that's another question.
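If you do need to know the host byte order, one common trick is to inspect the lowest-addressed byte of a known multi-byte value at run time. A minimal sketch, assuming an 8-bit-byte host (is_little_endian is a made-up helper name):

#include <stdbool.h>
#include <stdint.h>

/* True if the lowest-addressed byte of a multi-byte integer
   holds its least significant bits. */
static bool is_little_endian (void)
{
    uint16_t probe = 1u;
    return *(const unsigned char *)&probe == 1u;
}

Note, though, that the shift-based conversions above already produce a well-defined byte order regardless of host endianness; the byte order only bites you when you reinterpret memory directly.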