Upon execution of nr = 4.2, your Python set nr to exactly 4.20000000000000017763568394002504646778106689453125. This is the value that results from converting 4.2 to a binary-based floating-point format.
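You can see that stored value for yourself. Here is a minimal sketch, assuming CPython's float is an IEEE-754 binary64 (which it is on essentially every platform): converting the float to decimal.Decimal reveals the exact number held in nr rather than the short form the default display gives you.

```python
from decimal import Decimal

nr = 4.2
# Decimal(float) converts the exact binary value stored in nr,
# not the decimal literal "4.2", so every digit is printed.
print(Decimal(nr))
# 4.20000000000000017763568394002504646778106689453125
```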
The results shown for the subsequent subtractions appear to differ in their low digits solely because of formatting decisions. The default formatting for floating-point numbers does not show all of the digits. Python is not strict about floating-point behavior, but I suspect your implementation is showing just as many decimal digits as are needed to uniquely distinguish the binary floating-point number.
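For example, assuming the usual CPython behavior (repr prints the shortest decimal string that converts back to the same float), you can always ask for more digits explicitly:

```python
nr = 4.2
print(repr(nr))            # 4.2 -- shortest string that maps back to this float
print(format(nr, '.50f'))  # 4.20000000000000017763568394002504646778106689453125
print(float('4.2') == nr)  # True: "4.2" already identifies the value uniquely
```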
For “4.2”, “3.2”, and “2.2”, just two significant digits suffice, because each of these decimal numerals is closer to the stored binary floating-point value than to any of its neighbors.
Near 1.2, the floating-point format has more resolution (the value dropped below 2, so the exponent decreased, shifting the effective position of the significand lower and giving it another bit of resolution on an absolute scale). In consequence, there happens to be another binary floating-point number near 1.2, so “1.2000000000000002” is shown to distinguish the number currently in nr from that neighboring value.
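To see that neighbor, here is a small sketch (math.nextafter requires Python 3.9 or later, and the printed digits assume binary64) listing the representable doubles adjacent to the one stored for the literal 1.2:

```python
import math

x = 1.2  # the double closest to decimal 1.2
print(math.nextafter(x, 0.0))       # 1.1999999999999997 -- next double below
print(x)                            # 1.2
print(math.nextafter(x, math.inf))  # 1.2000000000000002 -- next double above,
                                    # which is the value 2.2 - 1 produces
```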
Near 0.2, there is even more resolution, so there are even more binary floating-point numbers nearby, and still more digits have to be used to distinguish the value.
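One way to see the resolution changing, again as a sketch assuming Python 3.9+ for math.ulp and a binary64 float, is to print the spacing between adjacent doubles at each of these magnitudes; the spacing halves every time the value crosses a power of two:

```python
import math

# math.ulp(x) is the gap between x and the next representable double,
# i.e. the absolute resolution of the format near x.
for value in (4.2, 3.2, 2.2, 1.2, 0.2):
    print(value, math.ulp(value))
# 4.2 8.881784197001252e-16
# 3.2 4.440892098500626e-16
# 2.2 4.440892098500626e-16
# 1.2 2.220446049250313e-16
# 0.2 2.7755575615628914e-17
```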