
I have some very simple code that takes a floating-point number and uses a while loop to keep subtracting 1 until it reaches zero:

nr = 4.2
while nr > 0:
    print(nr)
    nr -= 1

I expected the output to look like this:

4.2
3.2
2.2
etc...

But instead, I get this:

4.2
3.2
2.2
1.2000000000000002
0.20000000000000018

Where do these weird floating-point numbers come from? Why does it only happen after the third iteration of the loop? Also, very interestingly, this does not happen when the last decimal of `nr` is a 5.

What happened and how can I prevent this?

khelwood
Tijmen
    Welcome to floating-point inaccuracy – Aoki Ahishatsu Feb 11 '19 at 12:16
  • Also, `.1 + .2 == 0.30000000000000004`, that's how floating-point math works – ForceBru Feb 11 '19 at 12:16
  • To prevent this, look at the `decimal` module which provides a different way to handle float values. – Michael Butscher Feb 11 '19 at 12:33
  • @PatrickHaugh: This question is not a duplicate of [that question](https://stackoverflow.com/questions/588004/is-floating-point-math-broken). The details of floating-point arithmetic would explain why the output would be “4.2000…001776…”, “3.2000…001776…”, “2.2000…001776…”, and so on, if that were the output. But the output is “4.2”, “3.2”, “2.2”, and then “1.2000000000000002”, “0.20000000000000018”. That is **not** caused by floating-point arithmetic but by formatting decisions, and the reasons are not covered in that other question and its answers. – Eric Postpischil Feb 11 '19 at 17:22
  • @PatrickHaugh: A proper answer needs explanation of those formatting methods. By promiscuously closing questions as duplicates of that one, you are impairing the ability to convey knowledge about these issues. It promotes an attitude of just accepting floating-point behavior as uncontrollable and not completely understandable. It makes it harder for people to learn the details of these things. Please stop. – Eric Postpischil Feb 11 '19 at 17:24

1 Answer


Upon execution of `nr = 4.2`, Python sets `nr` to exactly 4.20000000000000017763568394002504646778106689453125. This is the value that results from converting 4.2 to a binary-based floating-point format.
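You can see the exact stored value yourself: constructing a `decimal.Decimal` directly from a float preserves the underlying binary value instead of rounding it for display. A quick sketch:

```python
from decimal import Decimal

# Decimal(float) converts the float's exact binary value to base ten,
# so this prints every digit of what nr actually holds.
print(Decimal(4.2))
# 4.20000000000000017763568394002504646778106689453125
```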

The results shown for the subsequent subtractions appear to vary in the low digits solely because of formatting decisions. The default formatting for floating-point numbers does not show all of the digits. Python is not strict about floating-point behavior, but your implementation is most likely showing just as many decimal digits as are needed to uniquely distinguish the binary floating-point number from its neighbors.

For “4.2”, “3.2”, and “2.2”, two significant digits are enough, because each of these short decimals is closer to the value actually stored in `nr` than to any other representable number.
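This shortest-round-trip behavior can be checked directly with `repr` and `float`, using the values from the question:

```python
x = 4.2 - 1
print(repr(x))           # 3.2 — the shortest decimal that reads back as x
print(float("3.2") == x) # True: "3.2" names exactly this double

y = 4.2 - 1 - 1 - 1
print(repr(y))           # 1.2000000000000002
print(float("1.2") == y) # False: "1.2" would name a *different* double,
                         # so more digits must be shown
```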

Near 1.2, the floating-point format has more resolution: the value dropped below 2, so the exponent decreased, which shifts the effective position of the significand lower and lets it carry another bit of resolution on an absolute scale. As a consequence, there happens to be another binary floating-point number close to 1.2, so “1.2000000000000002” is shown to distinguish the number currently in `nr` from that neighbor.
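The change in resolution can be measured with `math.ulp` (available in Python 3.9+), which returns the gap between a float and the next representable one. The gap halves each time the value drops below a power of two:

```python
import math

# Spacing between adjacent doubles shrinks as the magnitude drops,
# so more decimal digits can be needed to single out a value.
print(math.ulp(4.2))  # about 8.9e-16
print(math.ulp(1.2))  # about 2.2e-16 (= 2**-52)
print(math.ulp(0.2))  # about 2.8e-17
```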

Near .2, there is even more resolution, and so there are even more binary floating-point numbers nearby, and more digits have to be used to distinguish the value.
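To prevent the surprise altogether, the `decimal` module (mentioned in the comments) performs the arithmetic in base ten, so 4.2 is stored exactly. A sketch of the question's loop using it — note the value is built from a *string*, since `Decimal(4.2)` would inherit the binary inaccuracy:

```python
from decimal import Decimal

nr = Decimal("4.2")  # a string literal is stored exactly in base ten
while nr > 0:
    print(nr)        # 4.2, 3.2, 2.2, 1.2, 0.2
    nr -= 1
```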

Eric Postpischil