
I'm new to Python, and I'm trying to understand floating-point approximation and how floats are represented in Python.

For example:

>>> .1 + .1 + .1 == .3
False
>>> .25 + .25 + .25 == 0.75
True

I understand these two situations, but what about the following ones?

>>> .1 + .1 + .1 +.1 == .4
True
>>> .1 + .1 == .2
True

Is it just a coincidence that .1 + .1 + .1 + .1 and .1 + .1 compare equal to .4 and .2 respectively, even though these numbers are not represented exactly in Python? Are there other situations like this, and is there any way to identify them?

Thank you!

  • Most fractions are rounded slightly differently when converting between binary and decimal. Sometimes, when you round to binary, then do arithmetic, then round back to decimal, the roundings add up, and you can see an error (or detect an inaccuracy, such as `==` yielding False). Other times, the roundings cancel out, and you get the exact result you expected. Another example: `1. / 10` is not exactly one tenth, but `1. / 10 * 10` is exactly equal to `1`. (Or, perhaps, that much you knew already, in which case I'm sorry, and I'm not trying to insult your intelligence.) – Steve Summit Jan 31 '23 at 23:41
  • That has nothing to do with Python. It's just regular IEEE-754 floating-point arithmetic; Python uses what the platform provides. See [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) To be precise: Python `floats` are IEEE-754 `doubles`. – Homer512 Jan 31 '23 at 23:54
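
The cancellation described in the comment above can be checked directly at the Python prompt:

>>> 1. / 10 * 10 == 1       # the two rounding errors happen to cancel here
True
>>> 1. / 10 * 3 == 3. / 10  # here they do not, so the comparison fails
False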

2 Answers


Short answer: yes, it's just a coincidence.

Floats are represented as 64-bit IEEE 754 floating-point numbers in Python, also called double precision.

https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats

When you write 0.3, Python stores the closest IEEE double to 0.3.

When you add several numbers, these small errors in the last bits accumulate and you can end up with a different number than expected. Sometimes that happens sooner, sometimes later. Sometimes the errors cancel each other out, often they do not.
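
One way to make the accumulation visible is the standard decimal module: constructing a Decimal from a float shows the exact value that is actually stored. A quick sketch using the cases from the question:

>>> from decimal import Decimal
>>> Decimal(.1)            # the double closest to 0.1 is slightly too large
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(.1 + .1 + .1)  # the accumulated error pushes the sum one step above 0.3 ...
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(.3)            # ... while the double closest to 0.3 sits slightly below it
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(.1 + .1 + .1 + .1) == Decimal(.4)  # here the errors cancel: both sides are the same double
True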

This answer is a good read:

Is floating point math broken?

To go deeper into your examples, you would need to look at the bit representation of these numbers. However, it gets complicated, as you also need to look at how rounding and addition work.
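
For example, the built-in float.hex() method prints the exact bit pattern of a float as a hexadecimal fraction, which makes the single-bit difference visible without decoding the IEEE fields by hand:

>>> (.1 + .1 + .1).hex()   # one unit too high in the last place
'0x1.3333333333334p-2'
>>> (.3).hex()             # the closest double to 0.3
'0x1.3333333333333p-2'
>>> (.1 + .1 + .1 + .1).hex() == (.4).hex()   # identical bit patterns, so == is True
True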

– Chris

Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.
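
One way to check whether a given decimal literal survives the conversion exactly is the standard fractions module, which recovers the exact rational value stored for a float; a small sketch:

>>> from fractions import Fraction
>>> Fraction(0.125)   # 1/8 is a binary fraction, so it is stored exactly
Fraction(1, 8)
>>> Fraction(0.1)     # 1/10 is not; the nearest double is stored instead
Fraction(3602879701896397, 36028797018963968)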

One illusion may beget another. For example, since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3, either:

>>> .1 + .1 + .1 == .3
False

Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot get any closer to the exact value of 3/10, pre-rounding with the round() function cannot help:

>>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
False

Though the numbers cannot be made closer to their intended exact values, the round() function can be useful for post-rounding so that results with inexact values become comparable to one another:

>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True
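
A related standard-library option is math.isclose() (available since Python 3.5), which compares floats using a relative tolerance instead of a fixed number of rounded digits:

>>> import math
>>> math.isclose(.1 + .1 + .1, .3)   # True within the default relative tolerance of 1e-09
True
>>> math.isclose(.1 + .1 + .1, .4)
False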