I need to take two unsigned 8-bit values, subtract them, and then add the difference to a 32-bit accumulator. The 8-bit subtraction may underflow, and that's ok (unsigned integer underflow is well-defined behavior, it just wraps around, so no problems there).
I would expect that static_cast<uint32_t>(foo - bar) should do what I want (where foo and bar are both uint8_t). But it would appear that this casts them first and then performs a 32-bit subtraction, whereas I need it to underflow as an 8-bit value. I know I could just mod 256, but I'm trying to figure out why it works this way.
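To show what I mean by the mod 256 workaround, here is a minimal sketch for the accumulator case (accumulate_diff and acc are made-up names, and I'm assuming the accumulator is a plain uint32_t): the idea is to force the difference back to 8 bits before it gets widened.

#include <cstdint>

// Hypothetical helper: wrap the difference at 8 bits first, then widen it.
uint32_t accumulate_diff(uint32_t acc, uint8_t foo, uint8_t bar)
{
    uint8_t diff = static_cast<uint8_t>(foo - bar); // wraps modulo 256
    return acc + diff;                              // widened to 32 bits here
}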
Example here: https://ideone.com/TwOmTO
#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t foo = 5;
    uint8_t bar = 250;
    uint8_t diff8bit = foo - bar;                                             // difference stored back into 8 bits
    uint32_t diff1 = static_cast<uint32_t>(diff8bit);                         // widen the 8-bit result
    uint32_t diff2 = static_cast<uint32_t>(foo) - static_cast<uint32_t>(bar); // subtract as 32-bit values
    uint32_t diff3 = static_cast<uint32_t>(foo - bar);                        // cast the subtraction expression
    printf("diff1 = %u\n", diff1);
    printf("diff2 = %u\n", diff2);
    printf("diff3 = %u\n", diff3);
    return 0;
}
Output:
diff1 = 11
diff2 = 4294967051
diff3 = 4294967051
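For reference, both of those values are consistent with the same difference wrapping at different widths:

5 - 250 = -245
-245 wrapped at 8 bits:  256 - 245        = 11
-245 wrapped at 32 bits: 4294967296 - 245 = 4294967051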
I would suspect diff3 would have the same behavior as diff1, but it's actually the same as diff2.
So why does this happen? As far as I can tell the compiler should be subtracting the two 8-bit values and then casting to 32-bit, but that's clearly not the case. Is this something to do with the specification of how static_cast behaves on an expression?
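One thing that could be checked is what type the compiler actually gives the subtraction expression itself. Here is a small sketch (assuming C++17 for std::is_same_v; the static_assert is just a compile-time probe, not part of my real code):

#include <cstdint>
#include <type_traits>

int main()
{
    uint8_t foo = 5;
    uint8_t bar = 250;
    // If this compiles, foo - bar has type int rather than uint8_t,
    // i.e. the subtraction itself is not done at 8 bits.
    static_assert(std::is_same_v<decltype(foo - bar), int>,
                  "uint8_t operands are promoted before the subtraction");
    return 0;
}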