As we all know, computers store floating-point numbers in finite memory, so most values can only be represented approximately (see IEEE 754).
This results in weird behaviour such as the following (here in Python, but the language shouldn't matter much):
>>> 0.1 + 0.2
0.30000000000000004
Or:
>>> 0.10000000000000001
0.1
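To illustrate the approximation, the decimal module can display the exact value that the float literal 0.1 actually stores (the nearest binary64 double):
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')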
However, when we print numbers with "a few" digits like 0.1, 0.2, 0.3, ..., we never end up with an approximation of the number (in my second example above, it's actually the other way around: 0.10000000000000001 renders as 0.1).
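To be clear, both literals seem to denote the very same float value, which would explain why the second one prints back as 0.1:
>>> 0.1 == 0.10000000000000001
True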
How does IEEE 754 (or Python, if this behaviour is due to the Python implementation) achieve this?