If you can be somewhat realistic and knowledgeable about your target platforms, you can fairly easily find some common ground and use that as the basis for your structures.
For example, if your target platforms are Windows and Linux running on current x86 hardware, then you know a few things already:
- `char` is exactly 1 byte per the Standard
- 1 byte is 8 bits
- `int8_t` and `uint8_t` are exactly 8 bits under C++11 (most C++03 compilers provide these anyway)
- `int16_t` and `uint16_t` are exactly 16 bits under C++11
- `int32_t` and `uint32_t` are exactly 32 bits under C++11
- `int64_t` and `uint64_t` are exactly 64 bits under C++11
- C-style ASCII strings are NULL-terminated `char` arrays
- Preprocessor directives are available to take control over packing (see the sketch after this list)
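Putting those guarantees together, here is a minimal sketch of a fixed-layout wire struct. The `MessageHeader` name and its fields are illustrative, not from any particular protocol; `#pragma pack(push, 1)` is assumed available because MSVC, GCC, and Clang all support it on the Windows/Linux x86 targets above:

```cpp
#include <cstdint>

// 1-byte packing so the compiler inserts no padding between fields.
#pragma pack(push, 1)
struct MessageHeader {
    uint16_t type;     // message type identifier
    uint16_t length;   // payload length in bytes
    uint32_t sequence; // monotonically increasing sequence number
};
#pragma pack(pop)

// With 1-byte packing, the layout is exactly 8 bytes on these targets.
static_assert(sizeof(MessageHeader) == 8, "unexpected padding in MessageHeader");
```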
Endianness will still be an issue, so you will have to byte-swap multi-byte types when converting to and from the wire format.
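One common way to handle that is to read and write individual bytes with shifts, so the code produces the same byte order regardless of the host's endianness. This is a sketch with illustrative function names, here fixing the wire format to little-endian:

```cpp
#include <cstdint>

// Serialize a uint32_t in little-endian order, independent of host endianness.
void write_u32_le(uint32_t value, unsigned char* out) {
    out[0] = static_cast<unsigned char>(value & 0xFF);
    out[1] = static_cast<unsigned char>((value >> 8) & 0xFF);
    out[2] = static_cast<unsigned char>((value >> 16) & 0xFF);
    out[3] = static_cast<unsigned char>((value >> 24) & 0xFF);
}

// Deserialize a little-endian uint32_t, again independent of host endianness.
uint32_t read_u32_le(const unsigned char* in) {
    return static_cast<uint32_t>(in[0])
         | static_cast<uint32_t>(in[1]) << 8
         | static_cast<uint32_t>(in[2]) << 16
         | static_cast<uint32_t>(in[3]) << 24;
}
```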
Trying to devise a structure which is guaranteed by the Standard to have the same binary representation on the wire on all exotic platforms is impossible. You can get close by using only `char`s, but even then there is no guarantee, because you don't know how many bits are in a byte (`CHAR_BIT` need not be 8).
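If you do take the `char`-only route, you can at least make that hidden assumption fail loudly at compile time on an exotic platform. A minimal sketch using C++11 `static_assert` and `CHAR_BIT` from `<climits>`:

```cpp
#include <climits>

// If a byte is not 8 bits, the build fails instead of silently
// producing a different wire format.
static_assert(CHAR_BIT == 8, "this wire format assumes 8-bit bytes");
```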
You could try to use bitfields to represent your data, but you still don't know how many bits are in a byte, so you can't be certain how much padding will be added at the end.
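To illustrate with an assumed example (not from the original): even a simple bitfield struct has implementation-defined allocation order, straddling rules, and end-of-unit padding, so its layout can differ between compilers running on identical hardware:

```cpp
#include <cstdint>

// Allocation order and padding of these fields are implementation-defined.
struct Flags {
    uint32_t version : 4;
    uint32_t flags   : 12;
    uint32_t id      : 16;
};
// On GCC, Clang, and MSVC for x86 this happens to be 4 bytes, but the
// Standard does not guarantee it, nor does it fix which bits hold `version`.
```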
Portability serves a purpose: portable code is easier to maintain and extend than platform-specific code. Pursuing portability for portability's sake is an academic goal that has no place in professional programming. Pursuing portability for maintainability's sake, on the other hand, is a laudable goal, but it's also a balancing act. If you come up with a completely portable solution that works on all possible platforms, but you will only ever run on two of them and the code is impossible to maintain, then what is the point?