I read the documentation on Distributed TensorFlow and have a question about between-graph replication: https://www.tensorflow.org/versions/master/how_tos/distributed/index.html
As I understand it, between-graph replication training creates the same number of graphs as there are workers, and the graphs share tf.Variables hosted on the parameter servers.
That is, each worker creates its own session and its own graph, and all of the graphs share the same tf.Variables.
However, I thought that two different sessions cannot share the same tf.Variable.
Am I misunderstanding something?
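For reference, here is a minimal sketch of what I think between-graph replication looks like. The cluster spec, job names, ports, and the toy variable are just my own assumptions for illustration, not taken from the docs:

```python
import tensorflow as tf

# Hypothetical cluster: one parameter server and two workers (addresses made up).
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

# Each process runs this script with its own job_name / task_index.
job_name = "worker"   # or "ps"
task_index = 0

server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()
else:
    # Between-graph replication: every worker builds its own copy of the graph,
    # but replica_device_setter pins the variables onto the ps job, so all
    # workers' sessions are supposed to read and update the same variables.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        w = tf.Variable(0.0, name="w")
        train_op = tf.assign_add(w, 1.0)

    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(train_op))
```

My confusion is about the last part: each worker here opens its own tf.Session, so I don't see how those separate sessions end up sharing one tf.Variable.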