
I'm trying to use eval() to understand what is happening in each learning step.

However, if I call eval() on an operation that depends on a tf.matmul of the placeholder, I get the error `You must feed a value for placeholder tensor`.

If I remove the eval() call, everything works as expected.

num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    writer = tf.summary.FileWriter("/home/ubuntu/tensorboard", graph=tf.get_default_graph())
    for step in range(num_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
        _, l, predictions, summary = session.run([optimizer, loss, train_prediction, summary_op], feed_dict=feed_dict)
        writer.add_summary(summary, step)

        # If I remove this line, everything works
        loss.eval()

The graph is defined as follows:

batch_size = 128

graph = tf.Graph()
with graph.as_default():
    with tf.name_scope('tf_train_dataset'):
        tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    with tf.name_scope('tf_train_labels'):
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    with tf.name_scope('tf_valid_dataset'):
        tf_valid_dataset = tf.constant(valid_dataset)
    with tf.name_scope('tf_test_dataset'):
        tf_test_dataset = tf.constant(test_dataset)

    with tf.name_scope('weights'):
        weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
    with tf.name_scope('biases'):
        biases = tf.Variable(tf.zeros([num_labels]))

    with tf.name_scope('logits'):
        logits = tf.matmul(tf_train_dataset, weights) + biases
    with tf.name_scope('loss'):
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
        tf.summary.scalar("loss", loss)

    with tf.name_scope('optimizer'):
        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.name_scope("train_prediction"):
        train_prediction = tf.nn.softmax(logits)
    with tf.name_scope("valid_prediction"):
        valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    with tf.name_scope("test_prediction"):
        test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)

    with tf.name_scope("correct_prediction"):
        correct_prediction = tf.equal(tf.argmax(tf_train_labels,1), tf.argmax(train_prediction,1))

    with tf.name_scope("accuracy"):
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        tf.summary.scalar("training_accuracy", accuracy)

    summary_op = tf.summary.merge_all()

The exact error is:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'tf_train_dataset/Placeholder' with dtype float and shape [128,784]
     [[Node: tf_train_dataset/Placeholder = Placeholder[dtype=DT_FLOAT, shape=[128,784], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Does anyone have a better way to log the variables? I've tried `tensor_summary`, but it doesn't show up in TensorBoard.

Thanks all

user1157751
  • If you just want to print the value of a Tensor during graph execution, you can attach a `tf.Print` node to the graph ([documentation](https://www.tensorflow.org/api_docs/python/control_flow_ops/debugging_operations#Print)). Just make sure you use the output of the `Print` operation in your graph ([example](http://stackoverflow.com/questions/33633370/how-to-print-the-value-of-a-tensor-object-in-tensorflow/36296783#36296783)). – Allen Lavoie Feb 09 '17 at 18:16
  • @AllenLavoie `tf.Print` prints to the console, not the IPython notebook; any way to change this? Thanks – user1157751 Feb 10 '17 at 02:20
  • `tf.InteractiveSession()` (rather than `tf.Session()`) fixes this for me. – Allen Lavoie Feb 10 '17 at 02:27
  • @AllenLavoie Thanks for your help again. When I changed to InteractiveSession, it gives me an `AttributeError: __exit__` on InteractiveSession. – user1157751 Feb 10 '17 at 03:39
  • @AllenLavoie Actually I commented out the line and replaced it with `session = tf.InteractiveSession(graph=graph)`, but it still prints to the console instead of the notebook. TensorFlow really needs better documentation than this... – user1157751 Feb 10 '17 at 03:45
  • Apparently I was mistaken. It's an ongoing issue: https://github.com/tensorflow/tensorflow/issues/6438 – Allen Lavoie Feb 10 '17 at 04:09
  • @AllenLavoie Thanks a lot for your help! Well... I guess I'll just use the console for now. With Tensorflow's hundreds of issues, I doubt they will do anything soon. Anyways, thanks a lot for your help. Do you want to post an answer? I'll accept it. – user1157751 Feb 10 '17 at 04:12
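
A minimal sketch of the `tf.Print` approach described in the comments above, assuming the graph from the question and the TF 1.x API. The wrapped tensor must be the one used downstream, otherwise the Print node is pruned from the graph and nothing is printed (and, as noted above, output goes to the console / notebook server log):

with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
    # Wrap loss so its current value is printed every time the graph evaluates it
    loss = tf.Print(loss, [loss], message="loss = ")
    tf.summary.scalar("loss", loss)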

1 Answer


Apart from AllenLavoie's comment, you can also pass the feed dictionary to eval():

loss.eval(feed_dict=feed_dict)

TensorFlow does not remember the feed dictionary from an earlier call; each session.run() or eval() needs its own feed. Hence the earlier

_, l, predictions, summary = session.run([optimizer, loss, train_prediction, summary_op], feed_dict=feed_dict)

does not make a bare loss.eval() work, even though it runs first.
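
For reference, a minimal sketch based on the training loop above (same names as in the question): the loss value is already returned by session.run() as l, so a second eval() simply repeats the forward pass with the same feed.

_, l, predictions, summary = session.run([optimizer, loss, train_prediction, summary_op], feed_dict=feed_dict)
print("loss from session.run:", l)                          # value fetched in the same run
print("loss from eval:", loss.eval(feed_dict=feed_dict))    # re-runs the forward pass with the same feed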

user1157751