
Just have a look at the outcome of these two similar expressions:

printf("%f",1.2f*100000000);

printf("%f",1.2f*10000000);

It will result in:

120000008.000000

12000000.000000

Why is it not equal?

Light

  • Why should it be? – Asteroids With Wings Oct 02 '20 at 13:46
  • You just discovered the precision limit in floating point numbers. ;) Don't rely too much on the value of the decimals you see: for big numbers they are meaningless but are displayed anyway because the default is 6 decimals. – Roberto Caboni Oct 02 '20 at 13:54
  • `1.2f` is a `float` (usually 4 bytes long with very limited precision). Use `1.2 * 100000000` (remove the trailing `f`), then `1.2` will be a `double` (usually 8 bytes long) and the precision will be much better. – Jabberwocky Oct 02 '20 at 14:00
  • Also read this very interesting article: https://stackoverflow.com/questions/588004/is-floating-point-math-broken – Jabberwocky Oct 02 '20 at 14:01
  • You could also make the number of digits adapt to the actual precision, with `%g`. See https://stackoverflow.com/a/30658980/11336762 – Roberto Caboni Oct 02 '20 at 14:09
  • I'm not sure that this is a duplicate. This question demands a specific answer, and it has nothing to do with what's representable exactly as float. Here we have a trivial issue of precision, and the supposed duplicate answer doesn't clearly cover that. It's disingenuous to close this one I think since it's ridiculous to expect anyone new to read the "deduplicated" answer and divine what the real problem was. The answer is much simpler than that! – Kuba hasn't forgotten Monica Oct 02 '20 at 14:15
  • I agree with @UnslanderMonica. I'm voting to reopen. – Roberto Caboni Oct 02 '20 at 14:17

1 Answer


You're doing your arithmetic in single-precision floating point. The mantissa of such a number (which encodes the significant digits you get) holds about 7 decimal digits at most; most people rely on only six, because floating point operations themselves also lose precision. In some cases, with many calculations folded into a result, the error grows even larger.

Floating point numbers are stored in base 2, and some simple decimal numbers are not exactly representable as finite binary fractions. For example, 0.1 in base 10 looks like this in base 2: 0.000110011001100110011..., where the trailing "0011" repeats forever. Similarly, 1.2 looks like 1.001100110011...

So the simplest of base-10 fractions, 0.1, has an infinite binary representation.

You usually don't notice: output routines typically round to a precision that hides this kind of thing, even with the "%f" conversion specifier.

So what's happened here is that you've printed enough precision on this number to exhaust that mantissa. You can do it with 1.2 as well. Try this slightly different program:

    #include <stdio.h>

    int main(void) {
        float f0, f1, f2, f3, f4;

        f0 = 1.2f;
        f1 = 1.2f * 100000000;
        f2 = 1.2f * 10000000;
        f3 = 120000000.0f;
        f4 = 12000000.0f;

        printf("f0:  %.10f\n", f0);
        printf("f1:  %f\n", f1);
        printf("f2:  %f\n", f2);
        printf("f3:  %f\n", f3);
        printf("f4:  %f\n", f4);

        return 0;
    }

The output on my machine is:

 f0:  1.2000000477
 f1:  120000008.000000
 f2:  12000000.000000
 f3:  120000000.000000
 f4:  12000000.000000

Print enough decimal places, and you'll find the limits of precision with smaller numbers as well, as demonstrated by the first line of output. What you really ran into was the fact that you multiplied by a large enough number so that the precision was exhausted while printing the integer portion of the number.

Note that just writing down the product doesn't have this problem (the output for f3 in the above example). The imprecision comes from multiplying the inexact representation of 1.2 by a large enough number that the imprecision is visible in the integer part of the number.

Remember: typical floating point calculations on computers operate on finite approximations of real numbers, not actual real numbers, and that finitude inevitably begets imprecision. Choosing the right precision-versus-speed trade-off is generally important for any interesting calculation.

There are also libraries that don't use the usual IEEE floats at all and represent numbers more as arrays of digits to be manipulated. They tend to be used when very large or very small magnitude numbers need to be manipulated with exact precision no matter what the cost in CPU time. That's why if you run the old UN*X utilities dc or bc, for example, you don't see this issue.

Thomas Kammeyer
  • I'd get rid of everything that deals with exact representations because they are totally not an issue here and just muddy the water. The long and the short of everything is that a decimal representation of `1.2f` has arbitrary digits after 6.9th position. – Kuba hasn't forgotten Monica Oct 02 '20 at 14:16
  • Sorry, but I disagree. Whether the representation is exact is precisely why using a finite number of bits to represent a number falls down. That's a key part of understanding why you lose precision. It's not like any arithmetic operations that lose precision are being done in this example, so the issue _is_ one of representation. – Thomas Kammeyer Oct 02 '20 at 14:25
  • Is there some reason why "Hope this helps." was removed from the end of my post? That seems like overly aggressive editing. I mean, it was only three words at the end and removing it didn't so much arguably improve the post as make it fit what I take to be someone else's idiosyncratic feelings about whether it was appropriate. Honestly, when this stuff happens it makes me feel like there are word police on SO and makes me less inclined to post answers. – Thomas Kammeyer Oct 02 '20 at 14:29
  • @Thomas Edit history has the reason for the removal: *"Stack Overflow is like an encyclopedia, so we prefer to omit these types of phrases. It is assumed that everyone here is trying to be helpful."* This is perfectly normal in SO and encouraged because it removes noise from the posts. – user694733 Oct 02 '20 at 14:40
  • Thanks for explaining @user694733, I missed the explanation in the edit history. That makes sense now. But I have to say this: an encyclopedia is not a community, and in small ways the ends of the two will be at odds. OK, that's of course merely my opinion and I'll certainly respect the site policy. I suppose if I was worked up about it enough, I could always raise it on meta some time. Thanks again. – Thomas Kammeyer Oct 02 '20 at 14:41
  • Re "It's not like any arithmetic operations that lose precision are being done in this example": What do you think `*100000000` and `*10000000` are? – Eric Postpischil Oct 02 '20 at 14:44
  • My bad, those are, of course, FLOPs, and it's even more or less irrelevant that they're possibly done at compile time. I was going to say the issue was still representational until I just ran the same program with the two multiplications removed and the products plugged in for them literally. The issue goes away because, as you're pointing out, the loss of precision in representing 1.2 is amplified by the multiplication by a large number. I'll go edit my answer. – Thomas Kammeyer Oct 02 '20 at 14:50