I have a small network that I trained for many hours and saved to a checkpoint. Now I want to restore it from the checkpoint, in a different script, and use it. I recreate the session: I build the entire network, so that all ops are created again, using the exact same code I used before training. This code sets the TensorFlow random seed from time.time() [which is different on every run].
I then restore from the checkpoint and run the network, and I get different numbers [small but meaningful differences] every time I run the restored network. Crucially, the input is fixed. If I fix the random seed to a constant value, the non-deterministic behavior goes away.
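For context, the restore script does roughly the following (build_network, fixed_input, and the checkpoint path are placeholders for my actual code):

```python
import time
import tensorflow as tf

tf.reset_default_graph()
tf.set_random_seed(int(time.time()))   # seed differs on every run

# Exact same graph-building code as in training (placeholder name)
inputs, outputs = build_network()

saver = tf.train.Saver()               # no var_list passed, so all variables are covered
with tf.Session() as sess:
    saver.restore(sess, "/path/to/model.ckpt")   # placeholder path
    result = sess.run(outputs, feed_dict={inputs: fixed_input})   # fixed input every run
```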
I am puzzled because I thought that a restore [no variable list was passed to the Saver, so I presume the whole graph was checkpointed] eliminates all random behavior from this flow: initializations etc. are overridden by the restored checkpoint, and this is only a forward run.
Is this possible? Does it make sense? Is there a way to find out which variables or factors in my graph are not set by the restored checkpoint?
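For example, I imagine something along the following lines could compare the graph's variables against what the checkpoint actually contains (checkpoint path is a placeholder), but I am not sure it would catch non-variable sources of randomness:

```python
import tensorflow as tf

# Compare variable names in the current graph with those stored in the checkpoint
reader = tf.train.NewCheckpointReader("/path/to/model.ckpt")   # placeholder path
ckpt_vars = set(reader.get_variable_to_shape_map().keys())
graph_vars = {v.op.name for v in tf.global_variables()}

print("In graph but not in checkpoint:", sorted(graph_vars - ckpt_vars))
print("In checkpoint but not in graph:", sorted(ckpt_vars - graph_vars))

# Inside a session, after saver.restore(...), this would list any variables
# that never received a value (empty output = everything was restored):
# print(sess.run(tf.report_uninitialized_variables()))
```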