I went through https://tensorflow.github.io/serving/serving_basic and was able to run inference on the MNIST example using the TensorFlow Serving server.
Now, I would like to export a trained, modified InceptionV3 model (training produces two files: a .pb graph file and a .txt labels file) and use it for inference.
The Serving Basic tutorial uses mnist_saved_model.py to train and export the model. Does that file need to be modified to export my trained, modified InceptionV3 model? Also, what is the difference between mnist_saved_model.py and mnist_export.py?
I looked at "How to serve the Tensorflow graph file (output_graph.pb) via Tensorflow serving?", but the example mnist_saved_model.py creates a version directory named 1 with the following contents:
$> ls /tmp/mnist_model/1
saved_model.pb variables
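From what I understand, I would need to wrap my frozen .pb graph into that SavedModel layout myself. Here is a rough sketch of my current attempt, using the `tf.compat.v1` SavedModel builder API; the input/output tensor names in the commented call are guesses, since they depend on how the graph was frozen:

```python
import tensorflow as tf

def export_frozen_graph(graph_pb_path, export_dir, input_name, output_name):
    """Wrap a frozen GraphDef (.pb) into the SavedModel layout
    (saved_model.pb + variables/) that tensorflow_model_server loads."""
    with tf.Graph().as_default() as graph:
        # Load the frozen graph from disk.
        graph_def = tf.compat.v1.GraphDef()
        with tf.io.gfile.GFile(graph_pb_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

        with tf.compat.v1.Session(graph=graph) as sess:
            # Tensor names are model-specific; inspect the graph to find yours.
            inputs = graph.get_tensor_by_name(input_name)
            outputs = graph.get_tensor_by_name(output_name)

            # export_dir should be a fresh version subdirectory, e.g. .../1
            builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
            signature = tf.compat.v1.saved_model.signature_def_utils.predict_signature_def(
                inputs={"images": inputs}, outputs={"scores": outputs})
            builder.add_meta_graph_and_variables(
                sess,
                [tf.compat.v1.saved_model.tag_constants.SERVING],
                signature_def_map={
                    tf.compat.v1.saved_model.signature_constants
                        .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
            builder.save()

# Hypothetical tensor names for an InceptionV3 frozen graph -- check yours first:
# export_frozen_graph("frozen_inception_v3.pb", "/tmp/inception_model/1",
#                     "input:0", "InceptionV3/Predictions/Reshape_1:0")
```

Is this the right general approach, or is there a supported tool for this conversion?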
Any thoughts on how to adapt this export step for my model?