I have code that uses TensorFlow v1 and I'd like to migrate it to native TensorFlow 2.
The code defines random objects (using numpy.random or random), a neural network (Keras weight initialization, etc.) and other TensorFlow random functions. At the end, it makes predictions on a random test set and outputs the loss/accuracy of the model.
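To make the setup concrete, here is a minimal sketch of the shape of the code (the data, model and layer sizes are illustrative, not my actual code):
import numpy as np
import tensorflow.compat.v1 as tf

# random test set (numpy.random)
x_test = np.random.rand(100, 10)
y_test = np.random.randint(0, 2, size=100)

# small Keras model: weights are randomly initialized
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# predictions on the random test set -> loss/accuracy
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(loss, accuracy)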
For this task, I have the original code and a copy of it, and I'm changing the code of the copy part by part. I want to make sure that the behaviour stays the same, so I want to fix the randomness in order to monitor whether the loss/accuracy change.
However, even after setting the seeds of the various random modules in my original file, launching it multiple times still gives different loss/accuracy values.
Here are my libraries:
import time
import random
import my_file as mf  # file in directory scope
import numpy as np
import copy
import os
from matplotlib import pyplot as plt
import tensorflow.compat.v1 as tf
and I'm setting the seeds at the beginning like this:
tf.set_random_seed(42)
random.seed(42)
np.random.seed(42)
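From what I've read about TF1 reproducibility, seeding random/numpy/tf alone may not be enough. A fuller setup might look like this (PYTHONHASHSEED, TF_DETERMINISTIC_OPS and the single-threaded session config are suggestions I've seen elsewhere, not something I've verified myself):
import os
os.environ['PYTHONHASHSEED'] = '42'        # only fully effective if set before the interpreter starts
os.environ['TF_DETERMINISTIC_OPS'] = '1'   # TF >= 2.1 only; does not cover every op

import random
import numpy as np
import tensorflow.compat.v1 as tf

random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)

# single-threaded session so thread scheduling can't reorder floating-point reductions
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
tf.keras.backend.set_session(tf.Session(config=config))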
My module my_file uses the random library, and I'm also setting the seed there.
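For reference, the seeding in my_file looks roughly like this (simplified and illustrative; my actual module does more). As far as I understand, the random module is a process-wide singleton, so this re-seeds the same generator as the call in my main file:
# my_file.py (simplified, illustrative)
import random

random.seed(42)  # same hidden Random instance as the one seeded in the main file

def shuffled_copy(items):
    out = list(items)
    random.shuffle(out)  # consumes state from the shared generator
    return out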
I do understand from the docs that tf.set_random_seed only sets the graph-level (global) seed, and that each random operation in TensorFlow also uses its own operation-level seed, resulting in different behaviors for consecutive calls. For example, if I call the training/testing cell 3 times, I get the consecutive loss values L1 -> L2 -> L3.
However, this should still result in the same behavior if I restart the environment, so why isn't that the case? If I restart the kernel and execute the cell 3 times, I get L1' =/= L1 -> L2' =/= L2 -> L3' =/= L3.
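To make the pattern concrete, here is what I mean, with tf.random_uniform standing in for my training/testing cell:
tf.set_random_seed(42)
a = tf.random_uniform([1])  # no op-level seed: TF derives one from the global seed

sess = tf.Session()
print(sess.run(a))  # L1
print(sess.run(a))  # L2, different from L1 (the stateful op advances its state)
print(sess.run(a))  # L3
# after restarting the kernel I would expect the same L1 -> L2 -> L3 again,
# but in my real code I get different values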
What else should I verify to make sure the behaviour is the same every time I restart the notebook kernel?