While testing mpi4py's comm.reduce() and comm.Reduce() methods in Python 2.7.3, I encountered the following behaviour: sometimes subtracting two complex numbers (type numpy.complex128, the output of some parallel calculation) that appear identical when printed on the screen produces a non-zero result, and comparing them with == occasionally yields False.
Example:
print z1, z2, z1-z2
(0.268870295763-0.268490433604j) (0.268870295763-0.268490433604j) 0j
print z1 == z2
True
but then
print z1, z2, z1-z2
(0.226804302192-0.242683516175j) (0.226804302192-0.242683516175j) (-2.77555756156e-17+5.55111512313e-17j)
print z1 == z2
False
I figured this had something to do with the finite precision of floats, so I resorted to just checking whether the difference abs(z1-z2) was bigger than 1e-16 (it never was, which is what one would expect if reduce() and Reduce() are equivalent). (EDIT: this is actually not a good way to check for equality. See here: What is the best way to compare floats for almost-equality in Python?)
I was wondering if there's a more straightforward way to compare complex numbers in Python.
Also, why does this behaviour arise? After all, a float (and as far as I know a complex is basically a pair of two floats) is stored on the machine in binary, as a sequence of bits. Isn't it true that if two numbers are represented by the same sequence of bits, their difference should be zero and the comparison with == should yield True?
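That intuition is correct: == really does compare the stored values exactly. The catch is that printing a float does not show all its bits. Python 2's print rounds to 12 significant digits, so two values whose bit patterns differ can display identically. A minimal sketch (Python 3 syntax; '%.12g' stands in for Python 2's str()):

```python
a = 0.1 + 0.2   # actually 0.30000000000000004 in binary
b = 0.3

# Both round to the same 12-digit string, as Python 2's print would show:
print('%.12g' % a)  # 0.3
print('%.12g' % b)  # 0.3

# But the exact comparison sees the differing last bit:
print(a == b)       # False
print(a.hex())      # 0x1.3333333333334p-2
print(b.hex())      # 0x1.3333333333333p-2
```

So the two printed numbers in the question were almost certainly not bit-identical; the display simply hid the difference.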
EDIT: OK, I found this: What is the best way to compare floats for almost-equality in Python?, which basically boils down to the same thing.
But then the last part of the question remains: why do floats work like that if in binary they are all basically represented by integers?
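As an aside on why reduce() and Reduce() can produce bit-different results at all: floating-point addition is not associative, because each intermediate sum is rounded to the nearest representable value. A parallel reduction that groups the terms differently can therefore round differently along the way. A toy sketch (plain Python, not mpi4py; the values are chosen to make the effect dramatic):

```python
# The same four terms, summed in two different groupings.
left_to_right = ((0.1 + 0.3) + 1e16) + -1e16
reordered     = (0.1 + 0.3) + (1e16 + -1e16)

print(left_to_right)   # 0.0 -- the 0.4 is absorbed when added to 1e16
print(reordered)       # ~0.4
print(left_to_right == reordered)  # False
```

In a real reduction the discrepancy is usually only in the last bit or two, which matches the ~1e-17 differences observed above.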