Using instructions from Link, I have retrained TensorFlow Inception for new categories.
But I noticed that when I subsequently want to classify a set of images, it goes through them one by one, which takes a long time for a large data set, e.g. 45 minutes for 1,000 images.
For the image classification I am using the LabelImage.py script available online, shown below:
import tensorflow as tf
import sys

# Pass the test image file as the first argument
image_path = sys.argv[1]

# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Load the label file (the retrained labels) and strip off carriage returns
label_lines = [line.rstrip() for line
               in tf.gfile.GFile("/tf_files/tf_files/retrained_labels.txt")]

# Unpersist the graph from file
with tf.gfile.FastGFile("/tf_files/tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Feed the image_data as input to the graph and get the predictions
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})

    # Sort to show labels in order of confidence
    top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
    for node_id in top_k:
        human_string = label_lines[node_id]
        score = predictions[0][node_id]
        print('%s (score = %.5f)' % (human_string, score))
As you can see, it processes the images one at a time.
Is it possible to speed this up? The library build I used for retraining is not compiled for multiple GPUs. Are there any other ways to speed up the classification process?
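One likely source of the slowness is that each invocation of the script reloads the graph from disk and creates a fresh tf.Session, which can dwarf the cost of the actual inference. Below is a minimal sketch of reusing a single session across many images; it assumes the same retrained_graph.pb and retrained_labels.txt paths as above, and a hypothetical image_dir command-line argument pointing at a folder of JPEGs:

import os
import sys

import tensorflow as tf

image_dir = sys.argv[1]  # Hypothetical argument: directory of test images

# Load the label file once
label_lines = [line.rstrip() for line
               in tf.gfile.GFile("/tf_files/tf_files/retrained_labels.txt")]

# Load the graph once
with tf.gfile.FastGFile("/tf_files/tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    # Graph loading and session creation happen exactly once;
    # only the per-image sess.run() repeats inside the loop.
    for filename in os.listdir(image_dir):
        if not filename.lower().endswith(('.jpg', '.jpeg')):
            continue
        image_data = tf.gfile.FastGFile(
            os.path.join(image_dir, filename), 'rb').read()
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        # Report only the top prediction for each image
        best = predictions[0].argsort()[::-1][0]
        print('%s: %s (score = %.5f)'
              % (filename, label_lines[best], predictions[0][best]))

With this structure the graph load and session setup are paid once for the whole batch, and only the forward pass repeats per image, so the per-image overhead should drop substantially if you were previously invoking the script once per file.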