From a C perspective:
Much discussion here omits that a uint8_t operand of a shift (left or right) is first promoted to an int, and then the shift rules are applied.
The same occurs with uint16_t when int is 32-bit (more precisely, whenever int is 17 bits or wider).
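A quick way to see the promotion in action (a minimal sketch, assuming a C11 compiler for _Generic and a uint8_t variable as in the question):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t byte1 = 1;
    // The left operand is promoted to int before the shift, so the result type is int.
    puts(_Generic(byte1 << 8, int: "byte1 << 8 has type int",
                              unsigned int: "unsigned", default: "other"));
    return 0;
}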
When int is 32-bit:

hword0 << 32 is UB because the shift count (32) is outside the valid range of 0 to 31.

byte3 << 24 is UB when byte3 & 0x80 is nonzero, because it attempts to shift a 1 into the sign bit.
Other shifts are OK.
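To see the 0x80 boundary concretely (a minimal sketch; 0x80 is just a hypothetical value of byte3 with its top bit set):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t byte3 = 0x80;
    // Compute the mathematical value of byte3 << 24 safely in a wide type.
    long long wide = (long long)byte3 << 24;   // 0x80000000 == 2147483648
    // With 32-bit int this exceeds INT_MAX (2147483647), so byte3 << 24 is UB;
    // with 64-bit int it fits and the shift is well-defined.
    printf("%lld fits in int: %s\n", wide, wide <= INT_MAX ? "yes" : "no");
    return 0;
}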
Had int been 64-bit, OP's original code is fine - no UB, including hword0 << 32.
Had int been 16-bit, all of the code's shifts (aside from << 0) are UB or potential UB.
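For comparison, the conventional fix is to cast each shifted operand up to uint64_t first (a sketch using the question's variable names; it assumes hword0 is uint16_t and the bytes are uint8_t):

uint64_t result = ((uint64_t)hword0 << 32) + ((uint64_t)byte3 << 24) +
                  ((uint64_t)byte2 << 16) + ((uint64_t)byte1 << 8) + byte0;

Every shift then operates on a uint64_t, so all shift counts are in range and no sign bit is involved.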
To do this without casting (something I try to avoid), consider:
// uint64_t result = (hword0 << 32) + (byte3 << 24) + (byte2 << 16) + (byte1 << 8) + byte0
// Let an optimizing compiler do its job
uint64_t result = hword0;
result <<= 8;
result += byte3;
result <<= 8;
result += byte2;
result <<= 8;
result += byte1;
result <<= 8;
result += byte0;
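Here every <<= 8 is applied to a uint64_t, so the shifts never involve a promoted int and are well-defined for any int width; the byte additions are converted up to uint64_t by the usual arithmetic conversions. An optimizing compiler will typically fold the repeated shift-and-add back into the same wide shifts as the one-line expression.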
Or
uint64_t result = (1ull*hword0 << 32) + (1ul*byte3 << 24) + (1ul*byte2 << 16) +
(1u*byte1 << 8) + byte0;
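The 1ull, 1ul, and 1u multiplications force the usual arithmetic conversions before each shift: 1ull*hword0 has type unsigned long long (at least 64 bits), the 1ul terms have type unsigned long (at least 32 bits), and 1u*byte1 has type unsigned int (at least 16 bits). Each shift therefore operates on an unsigned type wide enough for its shift count, so no signed int is involved and no UB occurs.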