My question is related to this one: Tensorflow: How to get a tensor by name?

I can give names to operations, but they actually end up with different names. For example:

In [11]: with tf.variable_scope('test_scope') as scope:
    ...:     a = tf.get_variable('a', [1])
    ...:     b = tf.maximum(1, 2, name='b')
    ...:     print a.name
    ...:     print b.name
    ...:
test_scope/a:0
test_scope_1/b:0

In [12]: with tf.variable_scope('test_scope') as scope:
    ...:     scope.reuse_variables()
    ...:     a = tf.get_variable('a', [1])
    ...:     b = tf.maximum(1, 2, name='b')
    ...:     print a.name
    ...:     print b.name
    ...:
test_scope/a:0
test_scope_2/b:0

tf.get_variable creates a variable with exactly the name I ask for, but operations get a uniquified scope prefix (test_scope_1, test_scope_2, ...).

I want to name my operation so that I can retrieve it later. In my case, I want to get b with tf.get_variable('b') inside my scope.

How can I do it? I can't do it with tf.Variable because of this issue: https://github.com/tensorflow/tensorflow/issues/1325. Maybe I need to set additional parameters on the variable scope or on the operation, or somehow use tf.get_variable?

– ckorzhik

2 Answers


I disagree with @rvinas's answer: you don't need to create a Variable to hold the value of a tensor you want to retrieve. You can just use graph.get_tensor_by_name() with the correct name to retrieve your tensor:

with tf.variable_scope('test_scope') as scope:
    a = tf.get_variable('a', [1])
    b = tf.maximum(1, 2, name='b')

print(a.name)  # should print 'test_scope/a:0'
print(b.name)  # should print 'test_scope/b:0'

Now you want to recreate the same scope and get back a and b.
For b, you don't even need to be inside the scope; you just need its exact name:

with tf.variable_scope('test_scope') as scope:
    scope.reuse_variables()
    a2 = tf.get_variable('a', [1])

graph = tf.get_default_graph()
b2 = graph.get_tensor_by_name('test_scope/b:0')

assert a == a2
assert b == b2
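
If you don't remember the exact name, a quick sketch (assuming the default graph) is to list every operation and look for it:

# List all operation names in the default graph (TF 1.x API).
graph = tf.get_default_graph()
for op in graph.get_operations():
    print(op.name)  # e.g. 'test_scope/a', 'test_scope/b'

# A tensor name is the producing op's name plus an output index,
# so op 'test_scope/b' yields the tensor 'test_scope/b:0'.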
– Olivier Moindrot
  • Your solution looks fine. Just for clarification, I store the value of that tensor because of: "In my case I want to get b with tf.get_variable('b') in my scope" – rvinas Aug 28 '16 at 10:47
  • Both solutions work, but for now I don't know which is more the 'tensorflow way'. I'm just starting to code in tensorflow and don't know which patterns are good and which are not. With your approach I need to keep track of TensorFlow's internal tensor names; with @rvinas's approach I only need my own variable names. For now I've structured the code so that I only need Python variables. I need to read good tensorflow code in their repo, the tensorflow whitepaper, and the tutorials; maybe after that it will be clear which way is good in a particular case. – ckorzhik Sep 04 '16 at 00:27

tf.get_variable() won't work to get an operation. Therefore, I would define a new variable that stores the result of tf.maximum(1, 2) and retrieve it later:

import tensorflow as tf

with tf.variable_scope('test_scope') as scope:
    a1 = tf.get_variable('a', [1])
    # 'b' is a variable whose initial value is the result of tf.maximum(1, 2)
    b1 = tf.get_variable('b', initializer=tf.maximum(1, 2))

with tf.variable_scope('test_scope') as scope:
    scope.reuse_variables()
    a2 = tf.get_variable('a', [1])
    # the dtype must match the original variable (tf.maximum(1, 2) is int32)
    b2 = tf.get_variable('b', dtype=tf.int32)

assert a1 == a2
assert b1 == b2

Note that you need to define b using tf.get_variable() in order to retrieve it later.
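
As a quick sanity check, here is a sketch using the TF 1.x Session API to confirm that the variable picks up the initializer's value:

with tf.Session() as sess:
    # Run the initializers so 'b' receives the value of tf.maximum(1, 2)
    sess.run(tf.global_variables_initializer())
    print(sess.run(b2))  # prints 2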

– rvinas
  • Thanks! I'm not sure that I fully understand what the initializer is. Would it work for both trainable and non-trainable variables? – ckorzhik Aug 28 '16 at 08:56
  • You're welcome! The initializer is just an operation that sets a variable to an initial value. Yes, it should work for both trainable and non-trainable variables. The trainable=True parameter of tf.get_variable() (the default) adds the variable to the collection of trainable variables. – rvinas Aug 28 '16 at 09:04
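
To illustrate the trainable flag mentioned in the comment above, a minimal sketch (the variable name c is just for illustration):

# trainable defaults to True in tf.get_variable(); pass trainable=False
# to keep a variable out of the trainable-variables collection (TF 1.x).
c = tf.get_variable('c', initializer=tf.maximum(1, 2), trainable=False)
print(tf.trainable_variables())  # 'c' will not appear in this list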