
I am experimenting with Keras variables to build a custom loss function, and I stumbled upon a strange behavior. Let's take this elementwise operation on NumPy arrays:

import numpy as np

np_yt = np.arange(10)/10
np_yw = np.arange(10)
np_yt * np_yw

The output is

array([0. , 0.1, 0.4, 0.9, 1.6, 2.5, 3.6, 4.9, 6.4, 8.1])

I try to do the same with Keras variables:

from keras import backend as K

yt = K.variable(np.arange(10)/10)
yw = K.variable(np.arange(10))
K.eval( yt*yw )

The output is

array([0.        , 0.1       , 0.4       , 0.90000004, 1.6       ,
       2.5       , 3.6000001 , 4.9       , 6.4       , 8.099999  ],
      dtype=float32)

There is apparently a significant rounding error. My question is: is this expected, as explained e.g. in Is floating point math broken? And if so, why does it happen with Keras but not in plain NumPy? Is the floating-point binary representation different in the two cases?
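For reference, here is a quick check of my hypothesis, using only NumPy: if Keras stores variables in 32-bit floats by default (while NumPy defaults to 64-bit), then casting the inputs to float32 should reproduce the Keras output.

```python
import numpy as np

# NumPy computes in double precision (float64) by default.
np_yt = np.arange(10) / 10
np_yw = np.arange(10)
prod64 = np_yt * np_yw

# Casting the inputs to single precision (float32) before multiplying
# should mimic what the Keras backend does with its variables.
prod32 = np_yt.astype(np.float32) * np_yw.astype(np.float32)

print(prod64[3])  # 0.8999999999999999 -- rounds to "0.9" in the array display
print(prod32[3])  # 0.90000004 -- the same value shown by K.eval
```

So the underlying arithmetic rounds in both cases; the float32 version just has fewer bits, so the error becomes visible in the printed digits.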

Marcello
    The difference is in how the values are rounded for display, most likely. – kindall Sep 05 '18 at 22:21
  • Either how the values are formatted for display or the first values are computed with “double” precision (64-bit floating-point) while the second values are computed with “float”/“single” precision (32-bit). – Eric Postpischil Sep 05 '18 at 23:20

0 Answers