
So I'm training a model on a machine with a GPU. Of course, I save it at the end of training:

# Save all global variables to a checkpoint.
a = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(a)
saver.save(sess, save_path)

Now I have a saved checkpoint, but every time I restore the model from it I get different numbers in the matrices, and different predictions for the same examples. I restore the model like this:

saver = tf.train.import_meta_graph('{}.meta'.format(save_path))
sess.run(tf.global_variables_initializer())
saver.restore(sess, save_path)

What is happening here?

omer
  • If your model produces different results for the same input, that could also mean there is a source of stochasticity in your model. In any case, it would make sense to upload the model and the procedure you use to evaluate the values of the matrices, so that we, the community, can reproduce your experiment and see what's going on. – abhuse Jul 01 '18 at 13:52
  • I just use sess.run, and it happens even with a simple matrix like `char_embeddings = tf.get_variable('char-em', shape=[vocabulary_size, emb_size], initializer=tf.random_uniform_initializer(-1.0, 1.0, dtype=tf.float32))` – omer Jul 01 '18 at 14:53

1 Answer


When you call sess.run(tf.global_variables_initializer()) after importing the meta graph, you probably reinitialise some variables that you should not.
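
If the checkpoint contains every variable in the graph, as it should here since the Saver was built from the GLOBAL_VARIABLES collection, the simplest fix is to drop the initialiser call altogether: restore() assigns a saved value to every variable it covers. A minimal sketch:

# Restore without any initialiser call, assuming the checkpoint
# covers all variables in the graph:
saver = tf.train.import_meta_graph('{}.meta'.format(save_path))
saver.restore(sess, save_path)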

If you do need to initialise variables that are not in the checkpoint (for example, variables added to the graph after training), initialise only the uninitialised ones. One way to do it would be (credit to this answer):

uninitialized_vars = []
for var in tf.global_variables():  # tf.all_variables() is deprecated
    try:
        sess.run(var)  # fails if the variable has no value yet
    except tf.errors.FailedPreconditionError:
        uninitialized_vars.append(var)

# tf.initialize_variables() is deprecated; use tf.variables_initializer().
init_new_vars_op = tf.variables_initializer(uninitialized_vars)
sess.run(init_new_vars_op)  # the op does nothing until it is run
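
An alternative that avoids probing each variable with try/except is tf.report_uninitialized_variables(), which asks the runtime directly which variables have no value yet. A minimal sketch, assuming a TF 1.x session named sess with the graph already imported:

# report_uninitialized_variables() returns the names (as bytes) of
# variables that currently have no value in this session.
uninit_names = set(sess.run(tf.report_uninitialized_variables()))
uninit_vars = [v for v in tf.global_variables()
               if v.name.split(':')[0].encode() in uninit_names]
sess.run(tf.variables_initializer(uninit_vars))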
BiBi
  • Happy to hear this. You can upvote and/or accept this answer for better visibility :). – BiBi Jul 04 '18 at 08:47