
I've been working with the retrain example from TensorFlow Hub on GitHub, and I'm running into a couple of problems when trying to add these two things:

  1. A confusion matrix based on the final test results
  2. A way to log the time of each evaluation in the test set and add it to an array

This is the link to the retrain example

Confusion matrix

For the confusion matrix I changed the run_final_eval function to the following:

def run_final_eval(train_session, module_spec, class_count, image_lists,
                   jpeg_data_tensor, decoded_image_tensor,
                   resized_image_tensor, bottleneck_tensor):
  """Runs a final evaluation on an eval graph using the test data set.

  Args:
    train_session: Session for the train graph with the tensors below.
    module_spec: The hub.ModuleSpec for the image module being used.
    class_count: Number of classes
    image_lists: OrderedDict of training images for each label.
    jpeg_data_tensor: The layer to feed jpeg image data into.
    decoded_image_tensor: The output of decoding and resizing the image.
    resized_image_tensor: The input node of the recognition graph.
    bottleneck_tensor: The bottleneck output layer of the CNN graph.
  """
  test_bottlenecks, test_ground_truth, test_filenames = (
      get_random_cached_bottlenecks(train_session, image_lists,
                                    FLAGS.test_batch_size,
                                    'testing', FLAGS.bottleneck_dir,
                                    FLAGS.image_dir, jpeg_data_tensor,
                                    decoded_image_tensor, resized_image_tensor,
                                    bottleneck_tensor, FLAGS.tfhub_module))

  (eval_session, _, bottleneck_input, ground_truth_input, evaluation_step,
   prediction) = build_eval_session(module_spec, class_count)
  test_accuracy, predictions = eval_session.run(
      [evaluation_step, prediction],
      feed_dict={
          bottleneck_input: test_bottlenecks,
          ground_truth_input: test_ground_truth
      })
  tf.logging.info('Final test accuracy = %.1f%% (N=%d)' %
                  (test_accuracy * 100, len(test_bottlenecks)))

  confusion = tf.confusion_matrix(labels=test_ground_truth,
                                  predictions=predictions,
                                  num_classes=class_count)
  print(confusion)

  if FLAGS.print_misclassified_test_images:
    tf.logging.info('=== MISCLASSIFIED TEST IMAGES ===')
    for i, test_filename in enumerate(test_filenames):
      if predictions[i] != test_ground_truth[i]:
        tf.logging.info('%70s  %s' % (test_filename,
                                      list(image_lists.keys())[predictions[i]]))

The output is:

INFO:tensorflow:Final test accuracy = 88.5% (N=710)
INFO:tensorflow:=== CONwaka ===
Tensor("confusion_matrix/SparseTensorDenseAdd:0", shape=(5, 5), dtype=int32)

I also tried using tf.logging.info with the same result. I want to print it out in array form. I found this answer by MLninja, which seems like a better solution too, but I can't figure out how to implement it in the retrain file.

Any help is really appreciated!

Jowizo

1 Answer


You need to evaluate the confusion matrix tensor. Right now you are adding the confusion matrix operation to the graph and printing the operation itself, but what you want is to print the result of the operation, which is the matrix. In code it would look something like this:

confusion_matrix_np = eval_session.run(
  confusion,
  feed_dict={
      bottleneck_input: test_bottlenecks,
      ground_truth_input: test_ground_truth
  })

print(confusion_matrix_np)
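
Building on this, here is a minimal sketch of how it could sit inside run_final_eval, assuming the TF 1.x retrain.py variables from the question (in particular the prediction and ground_truth_input tensors returned by build_eval_session). Building the confusion op from those eval-graph tensors instead of the NumPy results lets accuracy, predictions and the confusion matrix come back from a single run call:

  # Sketch only: build the confusion op from the eval graph's tensors
  # (prediction, ground_truth_input) rather than from the NumPy arrays,
  # so it can be fetched together with the accuracy in one run() call.
  confusion = tf.confusion_matrix(labels=ground_truth_input,
                                  predictions=prediction,
                                  num_classes=class_count)

  test_accuracy, predictions, confusion_matrix_np = eval_session.run(
      [evaluation_step, prediction, confusion],
      feed_dict={
          bottleneck_input: test_bottlenecks,
          ground_truth_input: test_ground_truth
      })

  tf.logging.info('Final test accuracy = %.1f%% (N=%d)' %
                  (test_accuracy * 100, len(test_bottlenecks)))
  tf.logging.info('Confusion matrix:\n%s' % confusion_matrix_np)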
Thomas Pinetz