Double-precision (IEEE 754): the 53-bit significand gives 15 to 17 significant decimal digits of precision (Wikipedia).
Single-precision (IEEE 754): this gives 6 to 9 significant decimal digits of precision. All integers with 6 or fewer significant decimal digits can be converted to an IEEE 754 floating-point value without loss of precision (Wikipedia).
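That last claim about integers is easy to check (my own quick test, not from the article; 2^24 = 16_777_216 is where consecutive integers stop fitting into a float):
System.out.println((int) 999_999f == 999_999);        // true: every 6-digit integer is stored in a float exactly
System.out.println((int) 16_777_217f == 16_777_217);  // false: 2^24 + 1 already rounds to 16_777_216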
My guess is that by "decimal digits" Wikipedia means all the digits, i.e. 37456 in 37.456.
My understanding is that the lower bound (6 decimal digits for float and 15 for double) is always exact, and that sometimes (when?) I can rely on up to 9 (float) and 17 (double) decimal digits inclusive. By decimal digits here I mean the 456 in 37.456.
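If I read that [6 - 9] range correctly, it could be checked with something like this (my own sketch; I'm assuming %.6g / %.9g round to 6 and 9 significant digits):
System.out.println(String.format("%.6g", Float.parseFloat("37.4561")));    // 37.4561 - a 6-significant-digit decimal survives decimal -> float -> decimal
System.out.println(Float.parseFloat(String.format("%.9g", 0.1f)) == 0.1f); // true - 9 significant digits are enough to restore the exact same float from text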
But a simple test snippet shows that even the 6th decimal digit lies:
System.out.println(90f == 90.000_001f); // true. Problem with 6th digit !!!
System.out.println(90f == 90.000_000_000_001f); // true - as expected
System.out.println(90f == 90.000_01f); // false. 5th digit still "exact" - as expected
I think 90.000_001 becomes 9.000_000_1 (the mantissa in base 10) * 10 (exponent E1) in the internal representation (because one digit must be used before the decimal point), so only the 7th digit becomes inexact, meaning 6 digits are guaranteed (if you convert 90.000_001 to that internal representation). But why then does the wiki speak about a [6 - 9] range?
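To see what is actually stored, I can print the exact value of the nearest float and the spacing between neighboring floats around 90 (a sketch; as far as I know, new BigDecimal(float) shows the exact binary value and Math.ulp gives the spacing):
System.out.println(new java.math.BigDecimal(90.000_001f)); // prints 90: the literal rounds to exactly 90.0f
System.out.println(new java.math.BigDecimal(90.000_01f));  // prints 90.0000076293945... : the next float above 90
System.out.println(Math.ulp(90f));                         // ~7.6293945E-6: floats near 90 are spaced almost 8 millionths apart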
Besides, as I understand it, the maximum possible precision (the number of representable decimal digits to the right of the decimal point) is given by simple math as the integer part of log10(2^numberOfBitsForMantissa), and this calculation gives 6 (for float) and 15 (for double). I don't include the 1 implicit bit (for the "integer" part) in numberOfBitsForMantissa (I don't actually know whether that bit may be used with a negative exponent, like 1.11 * E-1 = 0.111). But even with this +1 bit, the precision limit for float is at most 7 decimal digits (unlike the [6 - 9] in the wiki).
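Written out as code, the formula gives (a sketch; 23 is the stored fraction width of a float, 24 includes the implicit leading bit, 52 and 53 are the corresponding values for double):
System.out.println((int) (23 * Math.log10(2))); // 6
System.out.println((int) (24 * Math.log10(2))); // 7
System.out.println((int) (52 * Math.log10(2))); // 15
System.out.println((int) (53 * Math.log10(2))); // 15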
So how many digits of precision are guaranteed to be exact (see my code snippet), and why do I see differences between these three:
- my code snippet: up to 6 decimal digits (inclusive) for float
- wiki: [6 - 9] for float, [15 - 17] for double
- the log10(2^numberOfBitsForMantissa) formula, giving at most 6 (for float) and 15 (for double)
My guess is that if I convert a number like 99.000_01 to an internal representation like 9.900_001 * E1, then only 6 digits (for float) and 15 (for double) to the right of the decimal point are stored in the mantissa of this INTERNAL representation (in 9.900_001). If I have a number x < 1 (for example 0.345_567_8), then (and only then) it can be represented as 3.455_678 * E-1, and in this case 7 digits (float) and again 15 digits (double) can be stored - lg(2^53) and lg(2^52) both have 15 as their integer part. Why Wikipedia writes something different, I have no clue!
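A quick way to compare how fine the float grid is at those two magnitudes (again just a sketch with Math.ulp):
System.out.println(Math.ulp(99.000_01f));   // ~7.6E-6: around 99 only 5-6 digits after the decimal point can be trusted
System.out.println(Math.ulp(0.345_567_8f)); // ~3.0E-8: below 1 about 7-8 digits after the decimal point can be trusted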
My research shows that there is no clear answer to my question: this (C++), this (C#), this (C++)
This (C++) states the following:
for an IEEE754 single precision floating point, the closest number to 9999990000 is 9999989760.
What is guaranteed is that your number and the float, when both are rounded to six significant figures, will be the same.
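That claim is easy to reproduce in Java too (a sketch; the cast rounds the long to the nearest float, and new BigDecimal shows its exact value):
System.out.println(new java.math.BigDecimal((float) 9_999_990_000L)); // 9999989760
// and indeed both 9999990000 and 9999989760 round to 9.99999E9 at six significant figures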