9

The number of nodes in the current graph keeps increasing with every iteration. This seems unintuitive, since the session is closed and all of its resources should be freed. Why are the previous nodes still lingering even when a new session is created? Here is my code:

import tensorflow as tf

for i in range(3):
    var = tf.Variable(0)
    sess = tf.Session(config=tf.ConfigProto())
    with sess.as_default():
        tf.global_variables_initializer().run()
        # count the ops registered in the session's graph
        print(len(sess.graph._nodes_by_name.keys()))
    sess.close()

It outputs:

5
10
15
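
For reference, the five new nodes per iteration are the ops that tf.Variable(0) and tf.global_variables_initializer() add to the default graph. A minimal sketch to list them by name (the Variable/Variable_1 naming assumes TF 1.x defaults):

import tensorflow as tf

for i in range(2):
    var = tf.Variable(0)
    init = tf.global_variables_initializer()
    # each iteration adds Variable*, Variable*/initial_value,
    # Variable*/Assign, Variable*/read and init* to the same default graph
    print(sorted(tf.get_default_graph()._nodes_by_name.keys()))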
titus
  • see answers to this http://stackoverflow.com/questions/33765336/remove-nodes-from-graph-or-reset-entire-default-graph – parsethis Mar 09 '17 at 22:42
  • Graph is a Python object only existing in Python-land, and TensorFlow C runtime doesn't know about it since it's language agnostic. If you look at session.close, you can see it pretty much just delegates to the C runtime tf_session.TF_CloseDeprecatedSession – Yaroslav Bulatov Mar 10 '17 at 03:59

3 Answers

19

Closing a session does not reset its graph, by design. If you want to reset the graph, you can either call tf.reset_default_graph(), like this

for _ in range(3):
    tf.reset_default_graph()
    var = tf.Variable(0)
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        print(len(session.graph._nodes_by_name.keys()))

or you can create a new graph for each iteration, like this

for _ in range(3):
    with tf.Graph().as_default() as graph:
        var = tf.Variable(0)
        with tf.Session() as session:
            session.run(tf.global_variables_initializer())
            print(len(graph._nodes_by_name.keys()))
Mad Wombat
  • IMHO the second approach creates a new graph in every iteration, which is not very clean and may be critical in terms of memory – gizzmole Sep 27 '17 at 14:17
  • Yes, it creates a new graph, same as the first approach. It does not reset or delete the old graph, but since that graph is no longer referenced once the `with` block exits, it will be garbage collected, so it is quite clean (a quick check of this is sketched after these comments). – Mad Wombat Sep 27 '17 at 18:49
  • Under the hood, `reset_default_graph()` basically drops and recreates the graph as well. Or rather the ops stack of the graph, but it is essentially the same thing. – Mad Wombat Sep 28 '17 at 19:19
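
The garbage-collection claim is easy to check with a weak reference; a minimal sketch, assuming nothing else in the program holds on to the graphs:

import gc
import weakref

import tensorflow as tf

refs = []
for _ in range(3):
    with tf.Graph().as_default() as graph:
        var = tf.Variable(0)
        refs.append(weakref.ref(graph))

del var, graph  # drop the last iteration's leftover references
gc.collect()    # collect the graph <-> variable reference cycles
print([r() for r in refs])  # expected: [None, None, None]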
2

I ran into session-closure issues when running a TensorFlow program from within Spyder. The RNN cells seem to persist, and trying to create new ones with the same name causes problems. This is probably because, when running from Spyder, the C-based TensorFlow session does not close properly even after the program has completed its run; Spyder has to be restarted to get a new session. Setting "reuse=True" on the cells gets around this problem when running from within Spyder. However, this does not seem like a valid mode for iterative programming when training an RNN cell; in that case, unexpected results/behaviors might occur without the observer knowing what is going on.
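
A sketch of the workaround described above; the scope name "my_rnn" and the cell size are illustrative, and tf.AUTO_REUSE (available in TF >= 1.4) is used instead of a bare reuse=True so the block also works on the very first run, when the variables do not exist yet:

import tensorflow as tf

# Re-running this in the same interpreter would normally fail with
# "Variable ... already exists"; AUTO_REUSE reuses the existing variables.
with tf.variable_scope("my_rnn", reuse=tf.AUTO_REUSE):
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=64)
    inputs = tf.placeholder(tf.float32, [None, 10, 8])
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)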

  • By restarting Spyder, I assume you mean restarting the kernel of the IPython console in Spyder? I had this issue where, if I ran the program twice, the Spyder IPython console would throw an error asking to set reuse=True. To resolve this, either restarting the console or following Mad Wombat's answer will work. – rort1989 May 31 '18 at 17:40
2

First, let's be clear about what happens in tf.Session().

It means you submit the default graph def to the TensorFlow runtime, and the runtime then allocates GPU/CPU/remote memory for it accordingly.

So when you close the session, the runtime just releases all the resources that were allocated, but leaves your graph untouched!
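
A minimal sketch of this split between runtime state and the Python-side graph: closing the first session frees the variable's storage, but the graph survives and a second session can run it after re-initializing.

import tensorflow as tf

var = tf.Variable(0)
init = tf.global_variables_initializer()

sess1 = tf.Session()
sess1.run(init)
sess1.close()  # releases the runtime resources (e.g. the variable's buffer)

# The Python-side graph is untouched, so a new session can reuse it,
# but the variable's state was freed and must be initialized again:
sess2 = tf.Session()
sess2.run(init)
print(sess2.run(var))                              # -> 0
print(len(tf.get_default_graph()._nodes_by_name))  # all ops still present
sess2.close()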

pinxue