
While studying Python's built-in float function, I read the floating-point doc and came away with some understanding:

  • A float's real (stored) value differs from its displayed value; for example, 0.1's real value is 0.1000000000000000055511151231257827021181583404541015625
  • Any float in Python has a fixed value determined by IEEE-754
  • math.fsum gives us the closest exactly representable value to the exact mathematical sum of the inputs
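The stored value behind a float literal can be inspected with the standard decimal module, since Decimal accepts a float and converts its binary value exactly:

```python
from decimal import Decimal

# Decimal(float) converts the stored IEEE-754 double exactly,
# revealing the real value behind the literal 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```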

But after doing a bunch of experiments, I still have some unresolved doubts.

Doubt1

The tutorial doc mentioned in the first paragraph gives this example:

>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True

From the doc's instructions, I got the impression that math.fsum gives a more accurate result when summing floats.

But I found a special case within range(20): sum([0.1] * 12) == 1.2 evaluates to True, while math.fsum([0.1] * 12) == 1.2 evaluates to False, which perplexes me.

Why does this happen?
And what is the mechanism of sum when summing floats?

Doubt2

I found that for some float computations, repeated addition has the same effect as the equivalent multiplication. For example, 0.1+0.1+0.1+0.1+0.1 equals 0.1*5. But in some cases they are not equivalent: adding 0.1 up 12 times is not equal to 0.1*12. This really confuses me. Since a float is a fixed value determined by the IEEE-754 standard, by mathematical principle such an addition should equal its equivalent multiplication. The only explanation is that Python doesn't fully apply the mathematical principle here; some tricky stuff happens.

But what are the mechanism and details of this tricky stuff?

In [64]: z = 0

In [65]: 0.1*12 == 1.2
Out[65]: False

In [66]: for i in range(12):
    ...:     z += 0.1
    ...:

In [67]: z == 1.2
Out[67]: True


In [71]: 0.1*5 == 0.5
Out[71]: True

In [72]: z = 0

In [73]: for i in range(5):
    ...:     z += 0.1
    ...:

In [74]: z == 0.5
Out[74]: True
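One way I tried to make the difference visible (a sketch using decimal.Decimal, which converts a float's stored binary value exactly):

```python
from decimal import Decimal

via_mul = 0.1 * 12        # the exact product is rounded only once
via_add = 0.0
for _ in range(12):
    via_add += 0.1        # a fresh rounding happens at every addition

# The two paths land on different (adjacent) doubles:
print(Decimal(via_mul))   # 1.20000000000000017763568394002504646778106689453125
print(Decimal(via_add))   # 1.1999999999999999555910790149937383830547332763671875
print(via_mul == via_add) # False
```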
Zen
  • I think this may due to the floating point, take reference like this: https://stackoverflow.com/questions/2100490/floating-point-inaccuracy-examples, if you wanna avoid inaccuracy like this, you could try decimal data type. – Menglong Li Dec 12 '17 at 07:00
  • As you have already discovered floating numbers are inaccurate, so never expect `==` to work even if the math tells you it should, and even if sometime it does work ("by chance") – Julien Dec 12 '17 at 07:03
  • @Julien, yes, the thing which bothers me is that addition on floats doesn't equals to its equivalent multiplication. Which really confused me. Could you give me some hints why this happened. Was it because when each time doing the addition, the intermediate float result was rounded? Like when add 0.1 up 12 times, it was rounded 12times, meanwhile `0.1*12` only been rounded once? – Zen Dec 12 '17 at 07:08
  • @Zen there is "no" reason why multiplication should give the same result as its repeated addition "equivalent" because they follow 2 different paths / algorithms. While x*y "is" x+x+x... y times for integer values, this interpretation makes no sense for non integer floats, and as such that's not how the values are computed. – Julien Dec 12 '17 at 07:11
  • 2
    I probably shouldn't be doing this but check out something like `for i in range(100): for j in (0.1, math.pi, math.e): assert i*j==math.fsum(i*[j])` :-] – Paul Panzer Dec 12 '17 at 07:19
  • "addition on floats doesn't equals to its equivalent multiplication" you haven't observed that at all as @PaulPanzer shows. All you saw is that `12*0.1` is not the same as `1.2` – Julien Dec 12 '17 at 07:29
  • Well, actually, this little trick only works because [`fsum`](https://docs.python.org/3/library/math.html#math.fsum) happens to be [insanely](https://code.activestate.com/recipes/393090/) accurate. Don't expect that behavior from the normal `sum`. – Paul Panzer Dec 12 '17 at 07:40

1 Answer


When .1 is converted to 64-bit binary IEEE-754 floating-point, the result is exactly 0.1000000000000000055511151231257827021181583404541015625. When you add this individually 12 times, various rounding errors occur during the additions, and the final sum is exactly 1.1999999999999999555910790149937383830547332763671875.

Coincidentally, when 1.2 is converted to floating-point, the result is also exactly 1.1999999999999999555910790149937383830547332763671875. This is a coincidence because some of the rounding errors in adding .1 rounded up and some rounded down, with the net result that 1.1999999999999999555910790149937383830547332763671875 was produced.
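Both facts can be checked with the standard decimal module, which converts a float's stored value exactly:

```python
from decimal import Decimal

z = 0.0
for _ in range(12):
    z += 0.1  # each addition rounds the intermediate result to the nearest double

# The accumulated sum and the literal 1.2 are the very same double:
print(Decimal(z))    # 1.1999999999999999555910790149937383830547332763671875
print(Decimal(1.2))  # 1.1999999999999999555910790149937383830547332763671875
print(z == 1.2)      # True
```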

However, if .1 is converted to floating-point and then added 12 times using exact mathematics, the result is exactly 1.20000000000000006661338147750939242541790008544921875. Python’s math.fsum may produce this value internally, but it does not fit in 64-bit binary floating-point, so it is rounded to 1.20000000000000017763568394002504646778106689453125.

As you can see, the more accurate value 1.20000000000000017763568394002504646778106689453125 differs from the result of converting 1.2 directly to floating-point, 1.1999999999999999555910790149937383830547332763671875, so the comparison reports they are unequal.

In this answer, I step through several additions of .1 to examine the rounding errors in detail.

Eric Postpischil