While studying the Python built-in float type, I read the floating point tutorial doc and came away with this understanding:
- A float's real value differs from its displayed value; for example, the real value of 0.1 is 0.1000000000000000055511151231257827021181583404541015625.
- Every Python float has a fixed value determined by the IEEE-754 standard.
- math.fsum gives the closest exactly representable value to the exact mathematical sum of the inputs.
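To see the first point concretely, the standard decimal module can reveal a float's exact stored value (Decimal(x) converts a float exactly, while repr prints only the shortest string that round-trips to the same float):

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> repr(0.1)
'0.1'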
But after running a bunch of experiments, I still have some unresolved doubts.
Doubt 1
The tutorial doc mentioned in the first paragraph gives this example:
>>> sum([0.1] * 10) == 1.0
False
>>> math.fsum([0.1] * 10) == 1.0
True
From the doc's explanation, I got the impression that math.fsum gives a more accurate result when summing floats. But testing counts in range(20), I found a special case: sum([0.1] * 12) == 1.2 evaluates to True, while math.fsum([0.1] * 12) == 1.2 evaluates to False, which perplexes me.
Why does this happen? And what is the mechanism sum uses when summing floats?
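To make the discrepancy concrete, here is the same comparison with the exact stored values dumped via decimal.Decimal (a quick check I ran; Decimal(x) converts the float exactly). It shows that sum lands bit-for-bit on the float that the literal 1.2 stores, while fsum produces the neighbouring float:

>>> import math
>>> from decimal import Decimal
>>> Decimal(1.2)                    # the exact value the literal 1.2 stores
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> Decimal(sum([0.1] * 12))        # twelve additions, rounded after each step
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> Decimal(math.fsum([0.1] * 12))  # exact sum, rounded once at the end
Decimal('1.20000000000000017763568394002504646778106689453125')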
Doubt 2
I found that for some float computations, repeated addition has the same effect as the equivalent multiplication: for example, 0.1+0.1+0.1+0.1+0.1 is equal to 0.1*5. But in other cases they are not equivalent: adding up 0.1 twelve times is not equal to 0.1*12. This really confuses me. Since each float is a fixed value determined by the IEEE-754 standard, ordinary mathematics says such repeated addition should equal the corresponding multiplication. The only explanation I can see is that Python doesn't fully apply the usual mathematical rules here; something tricky is going on.
But what are the mechanism and details of this tricky behavior?
In [64]: z = 0
In [65]: 0.1*12 == 1.2
Out[65]: False
In [66]: for i in range(12):
...: z += 0.1
...:
In [67]: z == 1.2
Out[67]: True
In [71]: 0.1*5 == 0.5
Out[71]: True
In [72]: z = 0
In [73]: for i in range(5):
...: z += 0.1
...:
In [74]: z == 0.5
Out[74]: True
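And the exact values behind the session above, again via decimal.Decimal (if I'm reading them right, the single multiplication and the twelve-step loop end on two different, neighbouring floats):

>>> from decimal import Decimal
>>> Decimal(0.1 * 12)   # one multiplication, rounded once
Decimal('1.20000000000000017763568394002504646778106689453125')
>>> z = 0.0
>>> for i in range(12):
...     z += 0.1        # rounded after every addition
...
>>> Decimal(z)
Decimal('1.1999999999999999555910790149937383830547332763671875')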