Situation: I train for a while, then want to save exactly the current training state to disk and exit. Later I want to resume training, and it should behave exactly as if I had never exited.
For simplicity, let's say I use SGD, although storing the updater state (Adam etc) is also not a problem.
However, I don't know how to read and store the random state. So when I recreate the graph and a new session next time, it will not continue the random sequence: either I seeded it deterministically, in which case it will restart the same sequence from the beginning, or it will be freshly random.
So, how can I read the random state? Or a random seed such that if I initialize later with that seed, it would continue with the same sequence?
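To illustrate what I mean by "reading the random state": NumPy's legacy generator exposes exactly this via get_state / set_state. A minimal sketch of the save-and-resume behavior I want (variable names are just for illustration):

```python
import pickle
import numpy as np

rng = np.random.RandomState(42)
rng.standard_normal(3)  # "train" for a while, consuming randomness

# Save the exact generator state to disk (here: just serialize it).
saved_state = pickle.dumps(rng.get_state())

a = rng.standard_normal(3)  # what the next training step would draw

# Later: recreate a fresh generator and restore the saved state.
rng2 = np.random.RandomState()
rng2.set_state(pickle.loads(saved_state))
b = rng2.standard_normal(3)

assert np.allclose(a, b)  # the restored generator continues the same sequence
```

This is the behavior I am after for the random ops inside the TF graph.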
If that is not possible, maybe there are other random generators I can use instead? I found out about tf.contrib.stateless, which seems to provide such generators. For example, I could use something like:
tf.contrib.stateless.stateless_random_normal(..., seed=global_step * some_number)

(where seed would actually need to be a shape-[2] integer tensor, e.g. tf.stack([global_step * some_number, 0]), since the stateless ops take a seed pair rather than a scalar).
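The idea being: each step's noise is a pure function of (base seed, step), so there is no hidden state to checkpoint at all, and resuming at step t reproduces the rest of the sequence. A sketch of that scheme, emulated with NumPy since I can test that outside a session (step_noise and base_seed are my own hypothetical names):

```python
import numpy as np

def step_noise(base_seed, step, shape):
    # Stateless draw: a pure function of (base_seed, step).
    # Nothing needs to be saved or restored across restarts.
    return np.random.RandomState(base_seed + step).standard_normal(shape)

# First run: steps 0..9.
run1 = [step_noise(123, t, (2,)) for t in range(10)]

# "Restart" from a checkpoint at step 5: steps 5..9 come out identical.
run2 = [step_noise(123, t, (5, 10)[0] - 3) if False else step_noise(123, t, (2,))
        for t in range(5, 10)]

assert all(np.allclose(a, b) for a, b in zip(run1[5:], run2))
```

So as long as the seed is derived from global_step (which is restored from the checkpoint anyway), the sequence would continue correctly.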