
What I'm trying to do is this:

graph = tf.get_default_graph()
with graph.as_default():
    with tf.Session() as load_sess:
        resSaver = tf.train.import_meta_graph(lastModel)
        resSaver.restore(load_sess, checkpoint_file)
        x = graph.get_operation_by_name("input/x").outputs[0]
        [...]

with tf.Session() as run_sess:
    run_sess.run(x, feed_dict={...})

The error I get is:

Attempting to use uninitialized value encoder/enc_b_4

and using:

print(sess_run.run(tf.report_uninitialized_variables()))

I can see that none of the variables are initialized. If I run `tf.global_variables_initializer()` the error goes away, but it looks like all the training is wiped out.
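This behavior can be reproduced in isolation. The sketch below (hypothetical variable and checkpoint names, written against `tf.compat.v1` so it also runs under TF 2.x) shows that variable values live inside a `Session`, so a restore performed in one session does not initialize the same variables in a different session:

```python
# Minimal sketch (hypothetical names): variable state lives inside a
# Session, so restoring in load_sess does NOT initialize other_sess.
import os, tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ckpt_path = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# Build a tiny graph and save a checkpoint.
graph = tf.Graph()
with graph.as_default():
    v = tf.get_variable("v", initializer=42.0)
    saver = tf.train.Saver()
    with tf.Session(graph=graph) as save_sess:
        save_sess.run(tf.global_variables_initializer())
        saver.save(save_sess, ckpt_path)

with graph.as_default():
    load_sess = tf.Session(graph=graph)
    saver.restore(load_sess, ckpt_path)   # v now has a value in load_sess...
    other_sess = tf.Session(graph=graph)  # ...but NOT in other_sess
    uninit = other_sess.run(tf.report_uninitialized_variables())
    print(uninit)            # contains b'v'
    print(load_sess.run(v))  # 42.0 -- only the restoring session has it
```

The graph only describes the computation; each session holds its own copy of the variable values, which is why `tf.report_uninitialized_variables()` lists everything in the second session.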

I tried to create the new session inside a block like this:

with graph.as_default():
   with tf.Session(graph=graph) as sess_run:
       [...]

and a few other variations with no luck.

What is the correct way to reuse a graph with multiple/different sessions?

The important points are:

  1. I want to load/init the graph only once because it takes a lot of time to do it (13 seconds)
  2. I want to run multiple sessions at the same time, with a very small pool of threads (maybe this is a bad idea, but this is what I want to discover). Even with a single thread I get this error.

Right now I use the same session to load and to evaluate the variables and it works. I store this session in a global variable and use that for all the requests (one at a time). I suspect there is a better solution.
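For reference, the pattern I use now can be sketched like this (hypothetical op names and a toy graph instead of the real restored model; `tf.compat.v1` so it also runs under TF 2.x): one long-lived session, restored once, shared by all request threads, since `Session.run()` is thread-safe:

```python
# Sketch of the shared-session pattern: load/restore ONCE, then reuse
# that single session from a small thread pool. Session.run() calls
# are safe to issue concurrently from multiple threads.
from concurrent.futures import ThreadPoolExecutor
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Toy stand-ins for the restored model's input/output ops.
    x = tf.placeholder(tf.float32, name="input/x")
    y = tf.square(x, name="output/y")

# One long-lived session shared by all requests.
sess = tf.Session(graph=graph)
# (in the real code, saver.restore(sess, checkpoint_file) would go here)

def handle_request(value):
    # Concurrent sess.run() calls on the same session are safe.
    return sess.run(y, feed_dict={x: value})

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, [1.0, 2.0, 3.0]))
print(results)  # [1.0, 4.0, 9.0]
```

The expensive part (building the graph and restoring the checkpoint) happens exactly once; the per-request cost is only the `run()` call.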

lorenzo
  • Your approach looks ok, but I'm suspicious of this line `x = graph.get_operation_by_name("output/x")`, that usually looks something like `x = graph.get_operation_by_name("output/x:0")`. Where `0` is an index number. Are you sure you're getting the right value of x? I'd investigate that some. – David Parks Apr 30 '18 at 22:45
  • You are right, that line is wrong, but only here on SO; I modified it to simplify the example. The actual line is this: `x = graph.get_operation_by_name("input/x").outputs[0]`. Sorry about that, I'm going to fix it. I made no changes to the working code except for replacing "load_sess" with "run_sess". – lorenzo Apr 30 '18 at 23:05
  • Note that variable state is stored within a Session. So the restore from the load_sess has no effect on sess_run. Why do you want to use multiple sessions? – suharshs May 01 '18 at 04:16
  • @suharshs One reason is I want to try to use multiple threads. I have spare memory on the GPU and low GPU usage (rough estimate based on nvidia-smi). The second reason is that I see the session as a lightweight object that should be used to process a single request and then discarded, but maybe this idea is wrong. I just realized that Session is thread-safe too ([here](https://stackoverflow.com/questions/38694111/is-it-thread-safe-when-using-tf-session-in-inference-service)) so maybe I should share that one. It just seems strange to me to have a session active for weeks or months. – lorenzo May 01 '18 at 11:58
  • 1
    It is ok to have a long running Session. The session supports multithreaded execution of operations and the graph. See here: https://stackoverflow.com/questions/41233635/meaning-of-inter-op-parallelism-threads-and-intra-op-parallelism-threads – suharshs May 01 '18 at 22:53

0 Answers