Say I have a network X and I want to add another layer on top of X, like this:
import tensorflow as tf

with tf.variable_scope("X_Layer_10"):
    W = tf.get_variable(initializer=tf.random_normal([10, 5], stddev=0.1), name='W')
    b = tf.get_variable(initializer=tf.zeros(5), dtype=tf.float32, name='b')
    out = tf.nn.softmax(tf.add(tf.matmul(layer_9_output, W), b))
X is already trained, but I want to train my own layer on top of it (fine-tuning). To add my own layer I do the following:
with tf.variable_scope("my_layer"):
    W = tf.get_variable(initializer=tf.random_normal([5, 2], stddev=0.1), name='W')
    b = tf.get_variable(initializer=tf.zeros(2), dtype=tf.float32, name='b')
    final_output = tf.add(tf.matmul(out, W), b)
Now my expectation is that the above code reuses the already trained part from X. However, when I print all the variables in the "my_layer" scope, I see things like my_layer/X_Layer_10/W:0, which looks like it has created new tensors rather than reusing the previously trained weights. Am I missing something?
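For context, this is roughly how I inspect the scope and how I plan to restrict training to my layer only (the loss tensor below is just a placeholder, it is not defined in the snippets above):

# Rough sketch: list the trainable variables under the "my_layer" scope
# and limit the fine-tuning step to just those variables.
# `loss` is a placeholder for my actual loss, not shown above.
my_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="my_layer")
for v in my_vars:
    print(v.name)  # this is where names like my_layer/X_Layer_10/W:0 show up

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, var_list=my_vars)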