I noticed some numerical differences in the output of a TensorFlow `Dense` layer depending on whether I process samples in a batch or one by one. Using `tf.float32`, the differences are on the order of 1e-7 / 1e-8.
```python
import tensorflow as tf
import numpy as np

BATCH_SIZE = 3
fc_layer = tf.keras.layers.Dense(units=1)

# one sample, and the same sample repeated BATCH_SIZE times
x = tf.convert_to_tensor(np.random.rand(1, 43))
x2 = tf.concat([x] * BATCH_SIZE, axis=0)

# forward pass of the single sample vs. the first row of the batch
y = fc_layer(x)
y2 = fc_layer(x2)[0, :]

print((y - y2).numpy().item())  # nonzero, on the order of 1e-7 / 1e-8
```
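If I understand correctly, this is the usual floating point rounding story: float32 addition is not associative, so a different grouping of the same operations can change the last bits, which matches the ~1e-7 magnitude (roughly one ulp for values around 1). A minimal NumPy sketch of the effect (my own illustration, not TensorFlow internals):

```python
import numpy as np

x = np.float32(1.0)
eps = np.float32(4e-8)   # smaller than half an ulp of 1.0 (~5.96e-8)

print((x + eps) + eps)   # 1.0        -- eps is rounded away twice
print(x + (eps + eps))   # 1.0000001  -- 8e-8 is large enough to round up
```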
Some questions:

- I assume this is due to how TensorFlow optimizes operations on batches. Is that correct?
- Is there any way to obtain an exactly `0` difference? (One idea I had is sketched after this list.)
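One thing I can think of (assuming the underlying matmul kernel is deterministic for a fixed input shape) is to feed every sample as its own batch of 1, so the single-sample and "batched" paths execute the exact same computation. Continuing from the snippet above:

```python
# run each row of x2 through the layer as its own batch of size 1
ys = tf.stack([fc_layer(x2[i:i + 1])[0] for i in range(BATCH_SIZE)], axis=0)
print((y - ys[0]).numpy().item())  # expected to be exactly 0.0
```

That defeats the purpose of batching, though, so I'd still like to know whether a real batched pass can match.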
Thanks
I tried `tf.float64` (via `tf.keras.backend.set_floatx("float64")`); the differences drop to the order of 1e-16 but are still present.
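For reference, the float64 variant I tried looks like this (note that `set_floatx` has to be called before the layer is constructed, so its weights are created as float64):

```python
import tensorflow as tf
import numpy as np

tf.keras.backend.set_floatx("float64")  # must run before the layer is built

BATCH_SIZE = 3
fc_layer = tf.keras.layers.Dense(units=1)
x = tf.convert_to_tensor(np.random.rand(1, 43))  # np.random.rand is float64 already
x2 = tf.concat([x] * BATCH_SIZE, axis=0)

y = fc_layer(x)
y2 = fc_layer(x2)[0, :]
print((y - y2).numpy().item())  # now on the order of 1e-16, but still nonzero
```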
NOTE: I'm using TensorFlow 2.9.0.
Edit: what is the connection between "Is floating point math broken?" and batch operations?
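I understand that floating point results depend on evaluation order in general; e.g. summing the same float32 values in a different order can change the result (plain NumPy, nothing TensorFlow-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(43, dtype=np.float32)

fwd = np.float32(0.0)
for a in v:          # accumulate left-to-right
    fwd += a
rev = np.float32(0.0)
for a in v[::-1]:    # accumulate right-to-left
    rev += a

print(fwd - rev)     # typically a small nonzero value in float32
```

What I don't see is why changing only the batch size would change the order of operations inside the layer.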