As a follow-up to the question "Is unsigned integer subtraction defined behavior?", I am confused about the following behavior.

In the code below, A = 50 and B = 100 are stored as unsigned 16-bit integers, and the subtraction A - B = -50 wraps to 65486 (mod 2^16). If I store the result of the subtraction in D (an unsigned 16-bit integer) and then evaluate D > 4000, I get true, since 65486 > 4000. That makes sense.

If I forgo storing A - B and evaluate A - B > 4000 directly, I get false. This seems inconsistent. Is this the expected result? Why? Is this always the correct behavior, or am I in the land of "undefined behavior"?
#include <stdio.h>
#include <stdint.h>

int main() {
    uint16_t A = 50;
    uint16_t B = 100;

    uint16_t D = A - B;    // D = 65486
    printf("D = %u\n", D);

    int R = D > 4000;      // R = 1 (true)
    printf("R = %d\n", R);

    int S = A - B > 4000;  // S = 0 (false)
    printf("S = %d\n", S);

    return 0;
}
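For reference, here is a minimal sketch (assuming a 32-bit int, as on my system) of how the comparison behaves once the conversions are spelled out with explicit casts; the casts are mine, added only for illustration:

#include <stdio.h>
#include <stdint.h>

int main() {
    uint16_t A = 50;
    uint16_t B = 100;

    // With the promotions written out, the comparison in S above is
    // equivalent to this signed comparison:
    printf("%d\n", (int)A - (int)B > 4000);   // prints 0, i.e. -50 > 4000

    // Converting the difference back to uint16_t first (as the assignment
    // to D does) restores the wraparound before the comparison:
    printf("%d\n", (uint16_t)(A - B) > 4000); // prints 1, i.e. 65486 > 4000

    return 0;
}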
BTW, this behavior seems to contradict the behavior in the code from this question, which further confuses me. If I change uint16_t to uint32_t above, then I get
D = 4294967246
R = 1
S = 1
which seems correct to me.
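Concretely, the uint32_t variant I ran is just the same program with the type swapped (this assumes uint32_t is unsigned int on this platform, so %u still matches):

#include <stdio.h>
#include <stdint.h>

int main() {
    uint32_t A = 50;
    uint32_t B = 100;

    uint32_t D = A - B;    // wraps modulo 2^32: D = 4294967246
    printf("D = %u\n", D);

    int R = D > 4000;      // unsigned comparison: R = 1
    printf("R = %d\n", R);

    int S = A - B > 4000;  // no promotion to int, still unsigned: S = 1
    printf("S = %d\n", S);

    return 0;
}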
Update: It seems the best detailed answer is that uint16_t gets promoted to an int (int is 32-bit on my system), so A - B > 4000 is done with signed arithmetic. When I switch to uint32_t, no promotion is performed (a 32-bit int cannot represent every uint32_t value), so A - B > 4000 is done with unsigned arithmetic. This would explain it.
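One way to confirm this explanation is to inspect the type of A - B directly; this is a C11 sketch, and the TYPE_NAME macro is mine, just for illustration:

#include <stdio.h>
#include <stdint.h>

// Maps the type of an expression to a printable name (the expression
// itself is not evaluated by _Generic).
#define TYPE_NAME(x) _Generic((x),        \
        int:          "int",              \
        unsigned int: "unsigned int",     \
        default:      "some other type")

int main() {
    uint16_t a16 = 50, b16 = 100;
    uint32_t a32 = 50, b32 = 100;

    // With a 32-bit int, both uint16_t operands are promoted to int,
    // so the subtraction is signed ...
    printf("uint16_t - uint16_t : %s\n", TYPE_NAME(a16 - b16));

    // ... but int cannot hold every uint32_t value, so no promotion
    // happens and the subtraction stays unsigned.
    printf("uint32_t - uint32_t : %s\n", TYPE_NAME(a32 - b32));

    return 0;
}

On my system this prints "int" for the uint16_t case and "unsigned int" for the uint32_t case.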
P.S. I know folks want to be first to answer, but just saying "integer promotion" is not a useful answer.