In many computer languages I'm familiar with, an expression such as
>>> 1.0 - ((1.0/3.0)*3.0)
0.0
will evaluate to a number close to 0.0 but not exactly 0.0. In Python it seems to evaluate to exactly 0.0. How does this work?
>>> 0.0 == (1.0 - ((1.0/3.0)*3.0))
True
>>> 0.0 == (1.0 - ((1.0/10.0)*10.0))
True
>>> 1.0 - (0.1 * 10)
0.0
>>> 0.0 == (1.0 - (0.1 * 10))
True
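For context, here is how I tried to poke at the exact stored values myself. This is just my own exploration using the standard-library fractions module (not something from the docs); Fraction(x) converts a float to the exact rational number it stores, and the exact numerator/denominator shown assume IEEE-754 double precision, which CPython uses on typical platforms:
>>> from fractions import Fraction
>>> Fraction(1.0 / 3.0)          # the exact rational value the float actually stores
Fraction(6004799503160661, 18014398509481984)
>>> Fraction((1.0 / 3.0) * 3.0)  # multiplying back rounds to exactly 1
Fraction(1, 1)
So the intermediate value really is inexact, but the final rounding step happens to land exactly on 1.0.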
When I look in the Python documentation, I don't see this example explicitly, but it seems to imply that, for example, 0.1 * 10 would not equal exactly 1. In fact it says that 0.1 is actually stored as 0.1000000000000000055511151231257827021181583404541015625
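To check that quoted value myself, I converted the floats with the standard-library decimal module, which shows the exact decimal expansion of what is actually stored (again assuming IEEE-754 doubles):
>>> from decimal import Decimal
>>> Decimal(0.1)        # the exact value stored for the literal 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(0.1 * 10)   # the product rounds back to exactly 1.0
Decimal('1')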
It would be great if someone could explain to me what's happening here.
By the way, that post is sort of the opposite of what I'm asking. It asks why floating-point computations are INACCURATE. I'm asking, rather, why floating-point computations are surprisingly/magically ACCURATE.