
I am trying to build a CNN using the Adadelta optimizer but am getting the following error:

tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_7/Adadelta

[[Node: Adadelta/update_Variable_7/ApplyAdadelta = ApplyAdadelta[T=DT_FLOAT, _class=["loc:@Variable_7"], use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_7, Variable_7/Adadelta, Variable_7/Adadelta_1, Adadelta/lr, Adadelta/rho, Adadelta/epsilon, gradients/add_3_grad/tuple/control_dependency_1)]] Caused by op u'Adadelta/update_Variable_7/ApplyAdadelta',

optimizer = tf.train.AdadeltaOptimizer(learning_rate).minimize(cross_entropy)

I tried reinitializing the session variables after the Adadelta statement, as mentioned in the post Tensorflow: Using Adam optimizer, but that didn't help either.

How can I avoid this error? Thanks.


import tensorflow as tf
import numpy
from tensorflow.examples.tutorials.mnist import input_data

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 100
batch_size = 1000
display_step = 1


# Set model weights
W = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])


W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])


W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

# Initializing the variables
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        total_batch = int(mnist.train.num_examples/batch_size)
        for i in range(total_batch):

            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            x_image = tf.reshape(batch_xs, [-1,28,28,1])

            h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
            h_pool1 = max_pool_2x2(h_conv1)

            h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
            h_pool2 = max_pool_2x2(h_conv2)

            h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
            h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)


            y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

            cross_entropy = tf.reduce_mean(-tf.reduce_sum(batch_ys * tf.log(y_conv), reduction_indices=[1]))
            #optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)

            optimizer = tf.train.AdadeltaOptimizer(learning_rate).minimize(cross_entropy)
            sess.run(init)

            correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(batch_ys, 1))
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
            sess.run([cross_entropy, y_conv, optimizer])
            print cross_entropy.eval()
Ram
  • First, I really think the model should be out of the loop. Put h_*, cross_entropy, optimizer, accuracy, etc. after the b_fc2 = bias_variable([10]) line. – Sung Kim Apr 26 '16 at 02:55

1 Answer


The problem here is that tf.initialize_all_variables() is a misleading name. It really means "return an operation that initializes all variables that have already been created (in the default graph)". When you call tf.train.AdadeltaOptimizer(...).minimize(), TensorFlow creates additional variables, which are not covered by the init op that you created earlier.
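You can see the extra variables for yourself with a minimal sketch (my own illustration, using the same TF 0.x API as your code; Adadelta keeps two accumulator "slot" variables per trainable variable):

import tensorflow as tf

v = tf.Variable(tf.zeros([10]), name="v")
loss = tf.reduce_sum(tf.square(v))

print len(tf.all_variables())   # 1: just v

# minimize() creates internal "slot" variables for Adadelta's accumulators.
train_op = tf.train.AdadeltaOptimizer(0.01).minimize(loss)

print len(tf.all_variables())   # 3: v plus its two Adadelta accumulators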

Moving the line:

init = tf.initialize_all_variables()

...after the construction of the tf.train.AdadeltaOptimizer should make your program work.
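Concretely, the relevant ordering is (sketch only):

# ... build the model and cross_entropy first ...
optimizer = tf.train.AdadeltaOptimizer(learning_rate).minimize(cross_entropy)

# Created *after* the optimizer, so the init op also covers the
# optimizer's internal accumulator variables.
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)   # run once, before the training loop
    # ... training loop ...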

N.B. Your program rebuilds the entire network, apart from the variables, on each training step. This is likely to be very inefficient, and the Adadelta algorithm will not adapt as expected because its state is recreated on each step. I would strongly recommend moving the code from the definition of batch_xs to the creation of the optimizer outside of the two nested for loops. You should define tf.placeholder() ops for the batch_xs and batch_ys inputs, and use the feed_dict argument to sess.run() to pass in the values returned by mnist.train.next_batch().
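A rough sketch of that restructuring (reusing your conv2d/max_pool_2x2 helpers and weight variables; the placeholder names x and y_ are my own choice):

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

# Build the graph once, outside the training loop.
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
optimizer = tf.train.AdadeltaOptimizer(learning_rate).minimize(cross_entropy)

init = tf.initialize_all_variables()  # after the optimizer, as above

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        total_batch = int(mnist.train.num_examples / batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Feed the NumPy batches in through the placeholders;
            # only graph *execution* happens inside the loop.
            _, loss = sess.run([optimizer, cross_entropy],
                               feed_dict={x: batch_xs, y_: batch_ys})
        if epoch % display_step == 0:
            print 'epoch %d, loss %f' % (epoch, loss)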

mrry