
As we all know, computers store floating-point numbers in finite memory, in an approximate way (see IEEE 754).

This results in weird behaviour like the following (here in Python, but it shouldn't matter much):

>>> 0.1 + 0.2
0.30000000000000004

Or

>>> 0.10000000000000001
0.1

However, when we print numbers with "a few" digits like 0.1, 0.2, 0.3, ..., we never end up with an approximation of the number (in my second example above it's the other way around: 0.10000000000000001 renders as 0.1).
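One quick way to see this (a minimal sketch using Python's standard struct module) is to check that both decimal strings parse to the exact same binary64 bit pattern, so they cannot be told apart once stored:

>>> import struct
>>> float("0.1") == float("0.10000000000000001")
True
>>> struct.pack(">d", float("0.1")).hex()
'3fb999999999999a'
>>> struct.pack(">d", float("0.10000000000000001")).hex()
'3fb999999999999a'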

How does IEEE 754 (or Python, if this behaviour is due to the Python implementation) achieve this?

Weier
  • Your question is unclear and seems to be based on a confusion between how IEEE 754 represents a number and how Python will format it when printing it or displaying it in the shell. – John Coleman Jun 14 '23 at 10:09
  • It's probably linked to the [rounding rules](https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules) of the IEEE 754 but i'm not sure myself of the exact process – Xiidref Jun 14 '23 at 10:22
  • @JohnColeman I cannot be clear on whether this is due to Python rendering or to some details of IEEE 754, because that's precisely the point of my question to ask where that behaviour comes from. – Weier Jun 14 '23 at 11:48
  • The convert-string-to-float and convert-float-to-string processes are reversible. If you start with the string `0.12`, that converts to a floating-point approximation that will be converted back to `0.12`. When you do arithmetic, that introduces rounding errors, so `0.1+0.2` results in a different binary value than converting `0.3` to a float. Your `0.10000000000000001` gets truncated because there aren't enough bits to hold that final 1. It is lost. – Tim Roberts Jun 14 '23 at 19:33

1 Answer


Python does not have a formal specification, but some Python implementations use an algorithm for converting floating-point numbers to strings that produces the decimal numeral with the fewest significant digits such that converting the decimal numeral back to floating-point yields the floating-point number.
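For example, here is what that looks like from the Python shell (a small sketch, assuming a CPython build that uses the fewest-digits algorithm):

>>> x = 0.1
>>> repr(x)             # fewest digits that round-trip back to x
'0.1'
>>> format(x, ".17g")   # more digits of the value actually stored
'0.10000000000000001'
>>> float(repr(x)) == x
True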

When .1 is converted to IEEE-754 double precision (binary64), the result is the nearest value representable in binary64, 0.1000000000000000055511151231257827021181583404541015625. When converting that to a string, the fewest digits algorithm produces “0.1”, because .1 is the shortest decimal numeral such that converting it to binary64 produces 0.1000000000000000055511151231257827021181583404541015625.
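You can inspect that exact stored value yourself: the standard decimal module converts a float to its exact decimal expansion (a minimal sketch, which also shows why 0.1 + 0.2 does not print as 0.3; it is a different binary64 value than the one 0.3 converts to):

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(0.1 + 0.2)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(0.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')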

So it is an inherent property of the fewest-digits algorithm that, if you convert a decimal numeral d without "too many" digits to a floating-point number f, then the fewest-digits algorithm applied to f will produce the numeral d again, aside from cosmetic differences like adding a leading 0 before ".1".
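A rough way to convince yourself of this round-trip property (a sketch, just looping over a handful of short decimals):

>>> for s in ["0.1", "0.2", "0.3", "0.25", "0.125"]:
...     assert repr(float(s)) == s
...
>>> # no AssertionError: each short numeral comes back unchanged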

Eric Postpischil
  • It should be said that the fewest-digits algorithm is not trivial and took some work to perfect. – Mark Ransom Jun 14 '23 at 20:04
  • Pretty clear answer, thanks! (For those interested, I found the relevant part in cPython: https://github.com/python/cpython/blob/820febc535cd9548b11c01c3c6d572b442f20c35/Python/pystrtod.c#L792-L797C24) – Weier Jun 14 '23 at 20:31