Let us consider the following piece of code:

#include <stdio.h>
int main()
{
    float x = 0.33;
    printf("%.100f",x);

    return 0;
}

If float has 6 digits of precision, how is it possible to display more than 6 digits with printf?

phuclv
Zaratruta
  • A float doesn't have digits, it has bits. The bits it has are equivalent to *approximately* 7 (not 6) decimal digits. But when you convert (say) 24 bits to decimal, you can get up to 24 digits. That's what you're seeing, I'll bet. – Steve Summit May 17 '23 at 22:12
  • https://en.wikipedia.org/wiki/IEEE_754 <- it's in there – Ted Lyngmo May 17 '23 at 22:15
  • The *significant* digits could be located far to the right of the decimal point. – nielsen May 17 '23 at 22:15
  • Take the number 0.123456789. Convert it to float. You get 0.12345679104328…. Seven of the digits match, the rest do not. So `float` can accurately represent about 7 of the digits of the number you feed in to it. But then (since it's binary internally, not decimal), the digits it can't represent (the digits past 7) are apparently random, they're not nice, clean 0's or anything. – Steve Summit May 17 '23 at 22:15
  • Your program prints `0.3300000131130218505859375000…`. That's 7 digits accurately representing the number you fed in. There are 25 significant digits in total, which is one more than the 24 bits of significance which a `float` contains. (One more because what matters is the position of the lowest bit, 2^-25, whose exact decimal expansion needs 25 digits after the point.) But those 25 digits are a perfectly accurate representation of the 24-bit significand of the actual `float` value. After that you *do* get nice, clean 0's, out to the 100 digits you asked for, because after the first 25, there's nothing more to represent. – Steve Summit May 17 '23 at 22:22
  • [Related.](https://stackoverflow.com/questions/61609276/how-to-calculate-float-type-precision-and-does-it-make-sense/61614323#61614323) – Eric Postpischil May 18 '23 at 11:54

3 Answers

You tried to convert the decimal fraction 0.33 to a float. But, like most decimal fractions, the number 0.33 cannot be represented exactly in the binary representation used internally by type float. The closest you can get is the binary fraction 0.0101010001111010111000011. That fraction, if we convert it back to decimal, is exactly 0.3300000131130218505859375.

In decimal, if I tell you that you have 7 digits worth of significance, and you try to represent the number 1/3 = 0.333…, you expect to get 0.333333300000. That is, you expect to get some number of significant digits matching your original number, followed by 0's where there wasn't enough significance. And binary fractions work the same way: for type float, the binary fraction always has exactly 24 bits of significance, followed (if you like) by any number of binary 0's.

When we convert that binary number back to decimal, we get approximately 7 digits matching the decimal number we thought we had, followed not by zeroes, but rather, by what look like random digits. For example, 1/3 as a binary float is 0.0101010101010101010101011000000000 (note 24 significant bits), which when converted to decimal is 0.333333343267440795898437500000 (note 7 accurate digits).
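
To see this concretely, here is a minimal check (the exact digits shown assume the near-universal IEEE-754 binary32 format for float):

#include <stdio.h>

int main(void)
{
    float third = 1.0f / 3.0f;
    /* 24 bits of binary significance, converted back to decimal: */
    printf("%.30f\n", third);   /* prints 0.333333343267440795898437500000 */
    return 0;
}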

When you hear that type float has approximately 7 digits of significance, that does not mean you'll get 7 digits of your original number, followed by 0's. What it means is that you'll get approximately 7 digits of your original number (maybe 6, or maybe 8 or 9 or more), followed by some digits which probably don't match your original number but which aren't all 0, either. That's not actually a problem, especially if (as is recommended and proper) you print the number back out rounded to a useful number of digits. Where it does become confusing (and this comes up a lot) is when you print the number back out with a non-useful number of digits, using a format like %.100f, and the strange-looking digits which aren't all 0 perplex you, as they did here.
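
For example, here is a quick sketch of the difference between a useful and a non-useful number of digits (%g rounds to 6 significant digits by default):

#include <stdio.h>

int main(void)
{
    float x = 0.33f;
    printf("%g\n", x);      /* prints 0.33: rounded to a useful width */
    printf("%.100f\n", x);  /* the exact, perplexing 100-digit expansion */
    return 0;
}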

The fact that types float and double use a binary representation internally leads to endless surprises like this. It's not surprising that the representation is binary (we all know computers do everything in binary), but the inability of binary fractions to accurately represent the decimal fractions we're used to, now that's really surprising. See the canonical SO question Is floating point math broken? for more on this.

Steve Summit

"If float has 6 digits of precision ..." --> is a weak premise.

A common `float` does not have 6 digits of decimal precision, but 24 digits of binary precision.


how is it possible to display more than 6 digits with printf?

When printing a binary floating-point number in decimal, each binary digit contributes some power of 2, like ..., 16, 8, 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ...

The exact decimal expansion of the sum of those powers of 2 can readily require more than 6 decimal digits.
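
As a sketch of that decomposition (this assumes the common case of a 32-bit IEEE-754 binary32 float holding a normal, positive number):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float x = 0.33f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);             /* view the raw bits */

    int exp = (int)((bits >> 23) & 0xFF) - 127; /* unbiased exponent */
    uint32_t frac = bits & 0x7FFFFFu;           /* 23 stored fraction bits */

    /* The implicit leading 1 plus 23 fraction bits: 24 bits in all. */
    printf("%.9g is the sum of:\n", x);
    printf("  2^%d\n", exp);
    for (int i = 1; i <= 23; i++)
        if (frac & (1u << (23 - i)))
            printf("  2^%d\n", exp - i);
    return 0;
}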

In the extreme, FLT_TRUE_MIN often has the exact value of:

0.00000000000000000000000000000000000000000000140129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125
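
You can ask printf for those digits directly. FLT_TRUE_MIN is standard since C11; how many of the digits come out exactly depends on the quality of the implementation's printf:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_TRUE_MIN: the smallest positive (subnormal) float, 2^-149. */
    printf("%.150f\n", FLT_TRUE_MIN);
    return 0;
}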

Rarely are more than 9 significant decimal digits important.
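
Those 9 digits correspond to FLT_DECIMAL_DIG (also from C11): printing a float with 9 significant decimal digits is enough to read the text back and recover the exact same float. A minimal sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    float x = 0.33f;
    /* FLT_DECIMAL_DIG is 9 for IEEE-754 binary32: enough digits to
       round-trip any float through text and back unchanged. */
    printf("%.*g\n", FLT_DECIMAL_DIG, x);   /* prints 0.330000013 */
    return 0;
}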

chux - Reinstate Monica

If float has 6 digits of precision, why can we display more than 6 digits of floats with printf?

A float doesn't have 6 digits of precision by definition. You've opted to display more digits than the implementation can meaningfully provide, and it provides them: what you see is the decimal expansion of the binary value actually stored.

why can we display more than 6 digits of floats with printf?

You can tell the program to display whatever you have in a float/double/long double, and what you see is still expected to be an approximation of the value you assigned.

Displaying the full stored content of such a variable is mostly useful when debugging.
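
One handy tool when debugging is the %a conversion (standard since C99): it shows the stored value exactly, in hexadecimal floating-point notation, so nothing is rounded away. A minimal sketch:

#include <stdio.h>

int main(void)
{
    float x = 0.33f;
    /* %a prints the exact stored binary value in hexadecimal
       floating-point notation; no decimal rounding is involved. */
    printf("%a\n", x);   /* e.g. 0x1.51eb86p-2 */
    return 0;
}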

For reference: https://en.wikipedia.org/wiki/IEEE_754

Ted Lyngmo