Parts of this question have been addressed elsewhere (e.g. in "Is floating point math broken?").
The following reveals a difference in the way numbers are generated by division vs multiplication:
>>> listd = [i/10 for i in range(6)]
>>> listm = [i*0.1 for i in range(6)]
>>> print(listd)
[0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
>>> print(listm)
[0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5]
In the second case, the 0.3 entry carries a relative rounding error of about 1e-16, i.e. at the level of double-precision floating point.
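For reference, here is one way to see the exact values involved; nothing beyond the standard library is used, and decimal.Decimal only serves to display the exact value each float stores:
>>> from decimal import Decimal
>>> 3*0.1 - 3/10
5.551115123125783e-17
>>> Decimal(3/10)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')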
But I don't understand three things about the output:
- Since the only numbers here that are exactly representable in binary are 0.0 and 0.5, why aren't those the only numbers that print exactly above? (See the float.hex() check after this list.)
- Why do the two list comprehensions evaluate differently?
- Why are the two string representations of the numbers different, but not their binary representations?
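To make the first point concrete, float.hex() shows the exact binary values behind the entries that print "cleanly" from the division list; only 0.0 and 0.5 are stored exactly, while 0.1, 0.2, 0.3 and 0.4 are not:
>>> listd[1].hex()
'0x1.999999999999ap-4'
>>> listd[3].hex()
'0x1.3333333333333p-2'
>>> listd[5].hex()
'0x1.0000000000000p-1'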
>>> import struct
>>> def bf(x):
...     # pack as a big-endian 32-bit float, then reinterpret those bytes as a native int
...     return bin(struct.unpack('@i', struct.pack('!f', float(x)))[0])
>>> x1 = 3/10
>>> x2 = 3*0.1
>>> print(repr(x1).ljust(20), "=", bf(x1))
>>> print(repr(x2).ljust(20), "=", bf(x2))
0.3                  = -0b1100101011001100110011011000010
0.30000000000000004  = -0b1100101011001100110011011000010