Is the following true for all floats r from 0 to 1 in CPython 3.10 or newer, on a platform that uses IEEE-754 binary64 floating-point numbers?
float('%.17g' % r) == r
In other words, is a 17-significant-digit decimal representation of a float from 0 to 1 accurate enough to round-trip back to exactly the same binary value?
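For illustration, here is a quick check using 0.1 + 0.2, a well-known value whose decimal form needs all 17 significant digits (this is only one data point, not a proof):

```python
# Illustrative check: 0.1 + 0.2 == 0.30000000000000004, a value that
# needs all 17 significant decimal digits to survive the round trip.
r = 0.1 + 0.2
print('%.17g' % r)              # '0.30000000000000004'
print(float('%.17g' % r) == r)  # True
print(float('%.16g' % r) == r)  # False: 16 digits round to 0.3
```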
My conjecture is that it is true for 17 digits (but not for 16), as I have not yet found a counterexample with the following program:
import random

def check(precision=17):
    r = random.random()
    d = float(f'%.{precision}g' % r) - r
    if d != 0:
        print(d)

for i in range(1000000000):
    check(17)
However, this does not cover all cases, so I am searching for an answer grounded in theory.
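One gap worth noting: random.random() only ever returns multiples of 2**-53, so it misses most bit patterns in [0, 1), including subnormals and the more finely spaced floats below 0.5. A broader empirical probe, sketched under the assumption of IEEE-754 binary64 floats, can sample uniformly over bit patterns instead:

```python
import random
import struct

ONE_BITS = struct.unpack('<Q', struct.pack('<d', 1.0))[0]  # 0x3FF0000000000000

def random_float_01():
    # Positive binary64 bit patterns are ordered the same way as the
    # floats they encode, so any pattern in [0, bits(1.0)] is a float
    # in [0.0, 1.0], including subnormals; none are NaN or infinity.
    bits = random.randrange(ONE_BITS + 1)
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

for _ in range(100_000):
    r = random_float_01()
    if float('%.17g' % r) != r:
        print('counterexample:', r.hex())
        break
```

This still samples rather than proves anything, but it covers the full bit-pattern space of [0.0, 1.0] rather than only the values random.random() can produce.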
Depending on the answer, I am also looking for answers to the following questions:
- Would it be true for all floats, not only those from 0 to 1?
- If the conjecture is not true for 17 digits, is it true for more?
- If the conjecture is not true, is at least each
float('%.17f' % r)
unique?
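As a side note on that last question, %.17f fixes 17 digits after the decimal point rather than 17 significant digits, so very small floats collapse to the same string. A quick probe (using 1e-300 and 2e-300 as arbitrary tiny values):

```python
# '%.17f' keeps 17 digits *after the decimal point*, not 17 significant
# digits, so distinct tiny floats can format to the same string.
a, b = 1e-300, 2e-300
print('%.17f' % a)                              # '0.00000000000000000'
print('%.17f' % b)                              # '0.00000000000000000'
print(float('%.17f' % a) == float('%.17f' % b)) # True, although a != b
```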