Many TensorFlow operations have shared_name as an optional argument, for example make_initializable_iterator (related question), most (all?) TF resources (variables, TensorArray, ...), ConditionalAccumulator, _MutableDenseHashTable, FIFOQueue (related issue), etc.
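For concreteness, this is roughly how the argument shows up in the API (a minimal TF1-style sketch; the queue name is mine, just for illustration):

```python
import tensorflow as tf

# shared_name is just an extra keyword argument on the stateful op:
queue = tf.FIFOQueue(
    capacity=10,
    dtypes=[tf.float32],
    shared_name="my_shared_queue")  # hypothetical name
```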
In the documentation, it often says something like this:
shared_name: If non-empty, this table will be shared under the given name across multiple sessions.
But how does that work? How do I actually share that resource / tensor / op (or what exactly is being shared here?) across multiple sessions?
Would that be multiple sessions in the same process? Or multiple sessions across multiple processes/machines (remotely)?
Would they share the same memory (which is only possible within the same process, or at least on the same host, via shared memory)? Or how else would the state be synchronized?
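To make the question concrete: here is a minimal sketch of what I imagine (two sessions in the same process, connected to the same in-process tf.train.Server; the variable name "v" is mine). Is this the intended usage, i.e. would the second session actually see the state written by the first?

```python
import tensorflow as tf

server = tf.train.Server.create_local_server()

# Graph 1: create and initialize a variable. For variables, shared_name
# defaults to the node name ("v").
with tf.Graph().as_default():
    v1 = tf.Variable(42.0, name="v")
    sess1 = tf.Session(server.target)
    sess1.run(v1.initializer)

# Graph 2: a different graph, but the same node name and the same server
# target, so presumably it resolves to the same underlying resource.
with tf.Graph().as_default():
    v2 = tf.Variable(-1.0, name="v")
    sess2 = tf.Session(server.target)
    print(sess2.run(v2))  # 42.0 if the state is really shared?
```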
And is Graph.container related to that? From that doc:
Stateful operations, such as variables and queues, can maintain their states on devices so that they can be shared by multiple processes.
How does the sharing across multiple processes work?
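From that description I would expect something like the following to work (a sketch, not verified; "experiment0" is an arbitrary container name), where tf.Session.reset is then used to free the container's state on the server:

```python
import tensorflow as tf

server = tf.train.Server.create_local_server()

with tf.Graph().as_default():
    # Place the variable's state in a named container on the server.
    with tf.container("experiment0"):
        v = tf.Variable(1.0, name="v")
    sess = tf.Session(server.target)
    sess.run(v.initializer)

# Supposedly frees all resources in that container on the given target,
# so a later session would have to re-initialize v.
tf.Session.reset(server.target, containers=["experiment0"])
```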
And is distributed TensorFlow (tf.distribute) related to that? How?
Or remote_call? (See also this question.)