Just have a look at the outcome of these two similar expressions:
printf("%f",1.2f*100000000);
printf("%f",1.2f*10000000);
It will result in:
120000008.000000
12000000.000000
Why is it not equal?
You're doing your arithmetic in single-precision floating point. The mantissa of such a number (which encodes the significant digits you get) holds about 7 decimal digits at most -- most people choose to rely on only six, because floating-point operations themselves also lose precision. With lots of calculations folded into a result, the error can be larger still.
Floating-point numbers are stored in base 2, and some simple decimal numbers are not exactly representable as finite binary fractions. For example, 0.1 in base 10 looks like this in base 2: 0.000110011001100110011..., where the "0011" pattern repeats forever. Similarly, 1.2 looks like 1.001100110011...
So the simplest of base-10 fractions, 0.1, has an infinite binary representation.
You usually don't notice: output routines typically round to a precision that hides this kind of thing, even with the "%f" format conversion.
So what's happened here is that you've printed enough precision on this number to exhaust that mantissa. You can do it with 1.2 as well. Try this slightly different program:
#include <stdio.h>

int main(void) {
    float f0, f1, f2, f3, f4;

    f0 = 1.2f;
    f1 = 1.2f * 100000000;
    f2 = 1.2f * 10000000;
    f3 = 120000000.0f;
    f4 = 12000000.0f;

    printf("f0: %.10f\n", f0);
    printf("f1: %f\n", f1);
    printf("f2: %f\n", f2);
    printf("f3: %f\n", f3);
    printf("f4: %f\n", f4);
    return 0;
}
The output on my machine is:
f0: 1.2000000477
f1: 120000008.000000
f2: 12000000.000000
f3: 120000000.000000
f4: 12000000.000000
Print enough decimal places, and you'll find the limits of precision with smaller numbers as well, as demonstrated by the first line of output. What you really ran into is that you multiplied by a large enough number that the limited precision became visible in the integer portion of the number.
Note that just writing down the product doesn't have this problem (the output for f3 in the above example). The imprecision comes from multiplying the inexact representation of 1.2 by a large enough number that the imprecision is visible in the integer part of the number.
Remember: typical floating-point calculations on computers operate on finite approximations of real numbers, not real numbers themselves. That finiteness inevitably brings imprecision, and choosing the right precision-versus-speed trade-off is important for any interesting calculation.
There are also libraries that don't use the usual IEEE floats at all and instead represent numbers as arrays of digits to be manipulated. They tend to be used when very large or very small numbers must be handled exactly, whatever the cost in CPU time. That's why the old UN*X utilities dc and bc, for example, don't show this issue.