gcc version: gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
glibc version: glibc 2.28
#include <stdio.h>
int main (void)
{
int i = 2147000003;
int iplus = i+1000000; //-2146967293
printf ("i is %d\n", i);//2147000003
printf ("iplus is %d\n", iplus);//-2146967293
printf ("i+1000000 is %d\n", i+1000000);//-2146967293
printf ("iplus %s i\n", iplus < i ? "less than" : "not less than"); //less than
printf ("i+1000000 %s i\n", (i+1000000) < i ? "less than" : "not less than");//not less than
return 0;
}
Output:
i is 2147000003
iplus is -2146967293
i+1000000 is -2146967293
iplus less than i
i+1000000 not less than i
I thought the variable iplus would be the same as i+1000000, but it is not. Why?
I expected i+1000000 to be less than i, yet the program prints i+1000000 not less than i.
See the comments for the answer.
As the comments point out, the correct answer should be that the compiler optimizes the comparison i + <positive constant> < i to the constant false, because mathematically that comparison can never be true. This question is not a duplicate: it does not explore the overflow itself, but a compiler optimization.
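Below is a minimal sketch of the same effect, my own illustration rather than part of the original post. The compiler is permitted to fold i + <positive constant> < i to false because signed integer overflow is undefined behaviour in C, while the assignment to iplus stores the already wrapped value, so the later iplus < i is an ordinary run-time comparison of two ints.
#include <stdio.h>
int main (void)
{
    int i = 2147000003;
    int iplus = i + 1000000; /* in practice wraps to -2146967293 */
    /* The compiler may fold this comparison to 0 (false) at compile time,
       on the assumption that signed overflow cannot happen. */
    printf ("expression: %d\n", (i + 1000000) < i);
    /* Two already-computed int values compared at run time: prints 1 (true),
       because iplus holds the wrapped value. */
    printf ("variable: %d\n", iplus < i);
    return 0;
}
Two real GCC flags are useful for checking this: building with -fwrapv defines signed overflow as two's-complement wrapping, so the fold is no longer allowed and both lines should print 1; building with -fsanitize=undefined should instead report the overflow in the iplus assignment at run time.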