What I'm trying to do is this:
```python
import tensorflow as tf

graph = tf.get_default_graph()
with graph.as_default():
    with tf.Session() as load_sess:
        resSaver = tf.train.import_meta_graph(lastModel)
        resSaver.restore(load_sess, checkpoint_file)
        x = graph.get_operation_by_name("output/x").outputs[0]
        [...]

with tf.Session() as sess_run:
    sess_run.run(x, feed_dict={...})
```
The error I get is:

```
Attempting to use uninitialized value encoder/enc_b_4
```

and using:

```python
print(sess_run.run(tf.report_uninitialized_variables()))
```
I can see that none of the variables are initialized. If I run tf.global_variables_initializer() the error goes away, but the restored weights appear to be wiped out (the training is lost).
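My current understanding of why this happens (a toy, non-TensorFlow sketch of the mental model, with made-up class names): the graph only holds the structure, while each Session keeps its own private copy of the variable values, so restoring a checkpoint into load_sess puts nothing into a second session created on the same graph.

```python
# Toy illustration (NOT TensorFlow): the graph stores structure only;
# each session owns its own variable values.

class Graph:
    def __init__(self):
        self.variable_names = []          # structure: which variables exist

class Session:
    def __init__(self, graph):
        self.graph = graph
        self.values = {}                  # per-session storage for values

    def restore(self, checkpoint):
        # restoring fills THIS session's value store only
        self.values.update(checkpoint)

    def uninitialized(self):
        return [n for n in self.graph.variable_names if n not in self.values]

graph = Graph()
graph.variable_names = ["encoder/enc_b_4", "output/x"]

load_sess = Session(graph)
load_sess.restore({"encoder/enc_b_4": 0.5, "output/x": 1.0})
print(load_sess.uninitialized())          # [] -- the restored session is fine

sess_run = Session(graph)                 # same graph, fresh session
print(sess_run.uninitialized())           # every variable is uninitialized
```

If this model is right, it would explain why the second session reports every variable as uninitialized even though the graph was restored once.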
I tried to create the new session inside a block like this:

```python
with graph.as_default():
    with tf.Session(graph=graph) as sess_run:
        [...]
```
and a few other variations with no luck.
What is the correct way to reuse a graph with multiple/different sessions?
The important points are:
- I want to load/init the graph only once, because it takes a long time (about 13 seconds)
- I want to run multiple sessions at the same time, with a very small pool of threads (maybe this is a bad idea, but that is what I want to find out). But even with a single thread I get this error.
Right now I use the same session both to load and to evaluate the variables, and it works. I store this session in a global variable and use it for all requests (one at a time). I suspect there is a better solution.
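For reference, the working setup I describe above (one global session loaded once, requests served one at a time) looks roughly like this. The ModelSession class and its run method are invented stand-ins for illustration; in the real code this would wrap the restored tf.Session (the expensive ~13 s load), with a lock providing the "one request at a time" behaviour:

```python
import threading

# Stand-in for the loaded TF session; in the real code this would hold
# tf.Session plus the restored graph (the expensive ~13 s load).
class ModelSession:
    def run(self, feed):
        return {"output/x": sum(feed.values())}   # dummy evaluation

_session = None
_lock = threading.Lock()

def get_session():
    # Load once, then reuse the same session for every request.
    global _session
    with _lock:
        if _session is None:
            _session = ModelSession()
        return _session

def handle_request(feed):
    sess = get_session()
    with _lock:                           # serialize: one request at a time
        return sess.run(feed)

print(handle_request({"a": 1, "b": 2}))   # {'output/x': 3}
```

The lock makes the single shared session safe to call from a small thread pool, at the cost of serializing every evaluation, which is exactly the limitation I would like to get past.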