Welcome to the wonderful world of floating point, where nothing is as it seems :-)
Most likely the final value of x is something like 1.0000000042 which, despite the fact that it would be printed as 1.000, is still compared as being greater than one.
If you modify your code thus, you'll see what I mean:
#include <stdio.h>

int main (void) {
    int i;
    double x;

    /* Print every value that still satisfies the x <= 1 condition. */
    for (i = 0, x = -1; x <= 1; x = x + 0.025)
        printf("X = %.3f (%.20f)\n", x, x);

    /* Print the value that finally failed the x <= 1 test. */
    printf("X = %.3f (%.20f)\n", x, x);

    return 0;
}
The lines of that output are:
X = -1.000 (-1.00000000000000000000)
X = -0.975 (-0.97499999999999997780)
X = -0.950 (-0.94999999999999995559)
X = -0.925 (-0.92499999999999993339)
X = -0.900 (-0.89999999999999991118)
: : : : :
X = 0.900 (0.90000000000000113243)
X = 0.925 (0.92500000000000115463)
X = 0.950 (0.95000000000000117684)
X = 0.975 (0.97500000000000119904)
X = 1.000 (1.00000000000000111022)
and you can see the inaccuracies creeping in pretty quickly.
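Incidentally, if you want the loop to land where you expect, a common trick is to count with an integer and derive x from the counter each time, so the rounding error never accumulates across iterations - something like this sketch:

#include <stdio.h>

int main (void) {
    int i;
    double x;

    /* Derive x from the integer counter rather than repeatedly adding
       0.025, so each value carries only a single rounding error instead
       of eighty accumulated ones. */
    for (i = 0; i <= 80; i++) {
        x = -1.0 + i * 0.025;
        printf("X = %.3f (%.20f)\n", x, x);
    }

    return 0;
}

The integer i is exact, so the comparison i <= 80 always behaves the way you'd expect, even though each individual x still carries a tiny representation error.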
You may also want to print out the value 0.025 with that %.20f format specifier - you'll see something like:
0.02500000000000000139
because 25/1000 (or 1/40) is one of those numbers that cannot be represented exactly in IEEE 754 double precision.