I am trying to implement a deep neural network, where I want to experiment with the number of hidden layers. In order to avoid error-prone code repetition, I have placed the creation of the layers in a for-loop, as follows:
import tensorflow as tf

def neural_network_model(data, layer_sizes):
    num_layers = len(layer_sizes) - 1  # number of hidden and output layers
    layers = []                        # holds the hidden and output layers

    # initialise the weights and biases
    for i in range(num_layers):
        layers.append({
            'weights': tf.get_variable("W" + str(i+1),
                                       [layer_sizes[i], layer_sizes[i+1]],
                                       initializer=tf.contrib.layers.xavier_initializer()),
            'biases': tf.get_variable("b" + str(i+1), [layer_sizes[i+1]],
                                      initializer=tf.zeros_initializer())
        })
    ...
The list layer_sizes given as input looks something like this:
layer_sizes = [num_inputs, num_hl_1, num_hl_2, ..., num_hl_n, num_outputs]
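For instance, a configuration with two hidden layers (the sizes here are purely illustrative) would be:

layer_sizes = [784, 500, 100, 10]  # inputs, hidden layer 1, hidden layer 2, outputs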
When I ran this code for the first time I had no problems. However, when I changed layer_sizes to have a different number of layers, I got an error:
ValueError: Variable W1 already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope
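Concretely, the failure shows up as soon as the function is called a second time in the same graph (the sizes below are just illustrative):

data = tf.placeholder(tf.float32, [None, 784])

# first call: builds W1, b1, W2, b2, ... without complaint
neural_network_model(data, [784, 500, 10])

# second call in the same default graph, e.g. to try a deeper network:
# raises the ValueError about W1 shown above
neural_network_model(data, [784, 300, 300, 10])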
I understand that this is because of the naming of the variables (which I don't even care about). How can I work around this so that fresh variables are simply created each time the code is rerun? I've done some googling and the solution seems to lie in the use of with tf.variable_scope(), but I can't figure out exactly how.
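From the examples I have found, my best guess is a sketch like the one below (the scope name "network" is just a placeholder). But simply wrapping the loop like this still raises the same error on the second call, and setting reuse=tf.AUTO_REUSE in the scope would hand back the previously created variables (when the shapes match) rather than re-initialise them, which seems to be the opposite of what I want:

def neural_network_model(data, layer_sizes):
    num_layers = len(layer_sizes) - 1
    layers = []
    # "network" is just a placeholder scope name
    with tf.variable_scope("network"):
        for i in range(num_layers):
            layers.append({
                'weights': tf.get_variable("W" + str(i+1),
                                           [layer_sizes[i], layer_sizes[i+1]],
                                           initializer=tf.contrib.layers.xavier_initializer()),
                'biases': tf.get_variable("b" + str(i+1), [layer_sizes[i+1]],
                                          initializer=tf.zeros_initializer())
            })
    ...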
EDIT - Just to be clear: I do not want to reuse any names or variables. I just want to (re-)initialise the weights and biases every time neural_network_model is called.