You can call tf.trainable_variables() if you are only concerned with the weights you can optimize. It returns a list of all variables whose trainable parameter is set to True.
import tensorflow as tf

tf.reset_default_graph()

# These can be optimized
for i in range(5):
    tf.Variable(tf.random_normal(dtype=tf.float32, shape=[32, 32]), name="h{}".format(i))

# These cannot be optimized
for i in range(5):
    tf.Variable(tf.random_normal(dtype=tf.float32, shape=[32, 32]), name="n{}".format(i), trainable=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    graph = tf.get_default_graph()
    for t_var in tf.trainable_variables():
        print(t_var)
Prints:
<tf.Variable 'h0:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h1:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h2:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h3:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h4:0' shape=(32, 32) dtype=float32_ref>
On the other hand, tf.global_variables() returns a list of all variables:
for g_var in tf.global_variables():
    print(g_var)
<tf.Variable 'h0:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h1:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h2:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h3:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'h4:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'n0:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'n1:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'n2:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'n3:0' shape=(32, 32) dtype=float32_ref>
<tf.Variable 'n4:0' shape=(32, 32) dtype=float32_ref>
UPDATE
To have more control over which Variables you retrieve, there are several ways to filter them. One way is what openmark suggested: filtering them based on the variable scope prefix.
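Here is a minimal sketch of that approach (the scope name encoder is just a placeholder): variables created inside a tf.variable_scope carry the scope name as a prefix, and tf.get_collection() accepts a scope argument that filters on it:
with tf.variable_scope('encoder'):
    tf.get_variable('w', shape=[32, 32])
# Only variables whose names start with 'encoder/' are returned
encoder_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='encoder')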
However, if this is not enough, for example if you wish to access several groups simultaneously, there are other ways. You could simply filter them by name:
for g_var in tf.global_variables():
    if g_var.name.startswith('h'):
        print(g_var)
However, you have to be aware of TensorFlow's variable naming conventions, such as the :0 suffix and variable scope prefixes.
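As a quick illustration (the scope name layer1 is arbitrary), a variable created inside a scope carries both the prefix and the :0 suffix in its name:
with tf.variable_scope('layer1'):
    v = tf.get_variable('w', shape=[32, 32])
print(v.name)  # 'layer1/w:0' -> scope prefix plus the ':0' output suffix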
A second, less involved way is to create your own collections. For example, if I am interested in variables whose names end with a number divisible by 2, and somewhere else in the code I am interested in variables whose names end with a number divisible by 4, I could do something like this:
# These can be optimized
for i in range(5):
    h_var = tf.Variable(tf.random_normal(dtype=tf.float32, shape=[32, 32]), name="h{}".format(i))
    if i % 2 == 0:
        tf.add_to_collection('vars_divisible_by_2', h_var)
    if i % 4 == 0:
        tf.add_to_collection('vars_divisible_by_4', h_var)
and then I can simply call the tf.get_collection() function:
tf.get_collection('vars_divisible_by_2')
[<tf.Variable 'h0:0' shape=(32, 32) dtype=float32_ref>,
<tf.Variable 'h2:0' shape=(32, 32) dtype=float32_ref>,
<tf.Variable 'h4:0' shape=(32, 32) dtype=float32_ref>]
or
tf.get_collection('vars_divisible_by_4')
[<tf.Variable 'h0:0' shape=(32, 32) dtype=float32_ref>,
<tf.Variable 'h4:0' shape=(32, 32) dtype=float32_ref>]
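As a side note, the lists at the beginning of this answer are built on the same mechanism: trainable variables are kept in the predefined tf.GraphKeys.TRAINABLE_VARIABLES collection, so the two calls below should return the same list:
tf.trainable_variables()
tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)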