By setting the random seed with tf.set_random_seed(1234), I can repeat training runs exactly; so far so good. However, I noticed slight deviations when introducing extra tensors into the graph. In the following example, B and C yield exactly the same losses, but A gives something slightly (but not altogether) different. It's important to note that in version C, intermediate_tensor is not attached to anything.
# version A:
output_tensor = input_tensor
# version B:
intermediate_tensor = input_tensor[..., :]
output_tensor = intermediate_tensor
# version C: intermediate_tensor is created but never used
intermediate_tensor = input_tensor[..., :]
output_tensor = input_tensor
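
For completeness, here is a minimal, self-contained sketch of the kind of setup I mean (TF 1.x graph mode). The single dense layer, the synthetic constant input, and the short training loop are only stand-ins for my actual model and data, not the code that produced the numbers above:

import numpy as np
import tensorflow as tf

VERSION = "A"  # switch between "A", "B", "C" across otherwise identical runs

tf.reset_default_graph()
tf.set_random_seed(1234)
np.random.seed(0)

# Stand-in input; in my real code this comes from the data pipeline.
input_tensor = tf.constant(np.random.rand(8, 4), dtype=tf.float32)

if VERSION == "A":
    output_tensor = input_tensor
elif VERSION == "B":
    intermediate_tensor = input_tensor[..., :]
    output_tensor = intermediate_tensor
else:  # "C": intermediate_tensor is created but never used
    intermediate_tensor = input_tensor[..., :]
    output_tensor = input_tensor

# Randomly initialised layer; its initial weights depend on the graph-level seed.
logits = tf.layers.dense(output_tensor, 1)
loss = tf.reduce_mean(tf.square(logits))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        _, loss_value = sess.run([train_op, loss])
        print(loss_value)

Switching VERSION between "A", "B" and "C" while keeping everything else fixed is how I compare the runs.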
I would appreciate any insights, as I cannot explain this behaviour. Is it possible that the random number generator is somehow influenced by the graph content?