I am experimenting with Keras variables to build a custom loss function and I stumbled upon some strange behavior. Consider this elementwise operation on NumPy arrays:
import numpy as np

np_yt = np.arange(10)/10
np_yw = np.arange(10)
np_yt * np_yw
The output is:
array([0. , 0.1, 0.4, 0.9, 1.6, 2.5, 3.6, 4.9, 6.4, 8.1])
I try to do the same with Keras variables:
from keras import backend as K

yt = K.variable(np.arange(10)/10)
yw = K.variable(np.arange(10))
K.eval( yt*yw )
The output is:
array([0.        , 0.1       , 0.4       , 0.90000004, 1.6       ,
       2.5       , 3.6000001 , 4.9       , 6.4       , 8.099999  ],
      dtype=float32)
There is apparently a significant rounding error. My question is: is this expected, e.g. as explained in "Is floating point math broken?"? But if so, why does it happen with Keras and not in plain NumPy? Is the binary floating-point representation different in the two cases?
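For what it's worth, I suspect the difference is the dtype rather than the library: NumPy defaults to 64-bit floats, while Keras variables are created as float32 by default. The following sketch (plain NumPy, no Keras) reproduces the same rounding by forcing float32:

```python
import numpy as np

# Same data as above, but cast to 32-bit floats, which is
# what K.variable uses by default (floatx = 'float32').
yt32 = (np.arange(10) / 10).astype(np.float32)
yw32 = np.arange(10, dtype=np.float32)

print(yt32 * yw32)          # shows 0.90000004, 3.6000001, 8.099999, ...
print((yt32 * yw32).dtype)  # float32
```

This prints the same "noisy" values as K.eval, suggesting the discrepancy is just float32 vs. float64 precision, not anything Keras-specific.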