Even though there are plenty of questions here on SO about reusing trained TensorFlow models, it is still a challenge to take one of the most mainstream models, Inception-v3, fine-tuned on a custom dataset, and just predict probabilities for a single image.
After doing some research on this topic (the closest SO thread is surely Tensorflow: restoring a graph and model then running evaluation on a single image), I can conclude that having a frozen graph.pb file of a trained model is like having the holy grail: you don't need to rebuild the graph, choose which tensors to restore, or anything else. You just call tf.import_graph_def and fetch the output layer you need via sess.graph.get_tensor_by_name.
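For reference, this is roughly how I load such a frozen graph. A minimal sketch: the file name is a placeholder, and I use the tf.compat.v1 namespace here (plain `import tensorflow as tf` on 1.x) so it also runs in graph mode on newer installs.

```python
# Sketch: load a frozen graph.pb and grab a tensor by name.
import tensorflow.compat.v1 as tf  # TF1-style graph API
tf.disable_eager_execution()

def load_frozen_graph(pb_path):
    # Parse the serialized GraphDef from disk.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    # Import it into a fresh graph; name='' avoids a scope prefix
    # so tensor names stay exactly as they were when frozen.
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name='')
    return graph

# usage (names are placeholders):
# graph = load_frozen_graph('frozen_inception_v3.pb')
# softmax = graph.get_tensor_by_name('softmax:0')
```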
But the problem is that in the examples shipped with TensorFlow (e.g. classify_image.py), such «frozen graphs» have nicely prepared input and output points, like DecodeJpeg/contents:0 and softmax:0 respectively, where you can feed your custom images in and retrieve the answers from, while you don't get such nice entry points when working with a custom fine-tuned model.
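To show what I mean, this is the kind of inference that works out of the box with the stock classify_image graph: you feed raw JPEG bytes to one named tensor and fetch probabilities from another. The file name is a placeholder, and I use the tf.compat.v1 namespace as an assumption so the snippet runs in graph mode today.

```python
# Sketch: classify_image.py-style inference against a graph that has
# the convenient DecodeJpeg/contents:0 / softmax:0 entry points.
import tensorflow.compat.v1 as tf  # TF1-style graph API
tf.disable_eager_execution()

def predict_jpeg(graph, jpeg_path):
    # Read the raw, undecoded JPEG bytes; the graph itself decodes them.
    image_data = tf.gfile.GFile(jpeg_path, 'rb').read()
    with tf.Session(graph=graph) as sess:
        softmax = sess.graph.get_tensor_by_name('softmax:0')
        # Feed the bytes straight into the decode node by name.
        return sess.run(softmax, {'DecodeJpeg/contents:0': image_data})
```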
For example, the frozen graph of a fine-tuned Inception-v3 model will have FIFOQueue, QueueDequeueMany and a dozen similar tensors before the actual convolutional layers in order to read batches from TFRecords, and the output tensor will look like tower_0/logits/predictions with an unusable shape that includes the batch size, so you simply have no appropriate points to feed a new JPEG image in and get predictions out.
Is there any success story of using such batch-fed fine-tuned models with new images? Or maybe some ideas on replacing the input part of the graph, the TFRecord/batch nodes, with a JPEG one?
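The only idea I have so far, which I have not verified on my actual graph, is to abuse the input_map argument of tf.import_graph_def to splice a hand-built JPEG decoding pipeline in place of the dequeued batch. A sketch under assumptions: the tensor name 'batch:0' is a guess at what the QueueDequeueMany output is called in a given graph, the 299×299 size and /255 scaling are the usual Inception preprocessing, and the tf.compat.v1 namespace stands in for the 1.x API.

```python
# Sketch: remap the queue-based batch input of a frozen graph to a
# placeholder-fed JPEG decoding pipeline via input_map.
import tensorflow.compat.v1 as tf  # TF1-style graph API
tf.disable_eager_execution()

def import_with_jpeg_input(graph_def, batch_tensor_name='batch:0'):
    graph = tf.Graph()
    with graph.as_default():
        # Fresh entry point: raw JPEG bytes.
        jpeg = tf.placeholder(tf.string, name='jpeg_input')
        image = tf.image.decode_jpeg(jpeg, channels=3)
        image = tf.image.resize_images(tf.cast(image, tf.float32),
                                       [299, 299])
        # Fake batch of one, scaled like typical Inception preprocessing.
        image = tf.expand_dims(image / 255.0, 0)
        # Splice our pipeline in where the dequeued batch used to be.
        tf.import_graph_def(graph_def, name='',
                            input_map={batch_tensor_name: image})
    return graph, jpeg
```

If this works, predictions would come from running the original output tensor while feeding only the jpeg_input placeholder, but I don't know whether the queue ops left dangling in the imported graph cause trouble.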
P.S. There is also an alternative for running pretrained models, TF Serving, but building a huge GitHub repo with plenty of dependencies for every step along the way seems overwhelming to me.