
I want to save my TensorFlow model and restore it later for prediction, and I use the estimator's export_savedmodel to save the model.

Following the docs, I use a serving_input_receiver_fn to specify the input. I also want to use export_outputs to specify the output, but I don't understand the difference between predictions and export_outputs.

# inside model_fn(features, labels, mode, params)
if mode == tf.estimator.ModeKeys.PREDICT:
    export_outputs = {
        'predict_output': tf.estimator.export.PredictOutput({
            'class_ids': predicted_classes[:, tf.newaxis],
            'probabilities': tf.nn.softmax(logits),
            'logits': logits
        })
    }
    predictions = {
        'class': predicted_classes[:, tf.newaxis],
        'prob': tf.nn.softmax(logits),
        'logits': logits,
    }
    return tf.estimator.EstimatorSpec(mode, predictions=predictions, export_outputs=export_outputs)

Another question is how to use the saved .pb model for prediction in a session:

with tf.Session(graph=tf.Graph()) as sess:
    model_path = 'model/1535016490'
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], model_path)
    inputs = sess.graph.get_tensor_by_name('input_example:0')
    # how to get the output tensor?
    # outputs = sess.graph.get_tensor_by_name()
    res = sess.run([outputs], feed_dict={inputs: examples})

I can use tensorflow.contrib.predictor to get results, but I want a universal method, because our team will restore the model in C++. So I think getting the tensors and running them in a session may be the method I want?

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(
    export_dir='model/1535012949',
    signature_def_key='predict_output',
    tags=tf.saved_model.tag_constants.SERVING
)

predictions = predict_fn({'examples': examples})

Thanks very much for your help!

pyfreyr

2 Answers


For the first question, I'm not 100% certain, but I believe that predictions is what you get back when you call estimator.predict(...) in Python, whereas export_outputs is used during serving. By that I mean, if you have a tensorflow/serving Docker container or some other server running and loaded with a saved model, and you query it with an input, the response will be based on your export_outputs definition.
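As a rough illustration, here is a minimal sketch of the two consumption paths (assuming the estimator, predict_input_fn, and serving_input_receiver_fn from your setup, which aren't shown in your snippet):

# predictions: consumed in Python when you iterate estimator.predict(...)
for pred in estimator.predict(input_fn=predict_input_fn):
    print(pred['class'], pred['prob'])  # keys come from the predictions dict

# export_outputs: consumed at export time; each key ('predict_output' here)
# becomes a SignatureDef in the SavedModel that serving clients query
estimator.export_savedmodel('model', serving_input_receiver_fn)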

I'm sorry, I don't have a good answer to your second question. There are so many different ways to save a TensorFlow model at this point that it's hard to tell. I would look at the official documentation for save and restore and find the suggested restore method based on how you saved your model and whether or not you used estimators.
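That said, one thing that may help with your "# how to get the output tensor?" comment: in TF 1.x, tf.saved_model.loader.load returns the MetaGraphDef, and its signature_def map records the concrete tensor names behind each key of your export_outputs. A sketch, assuming the 'predict_output' signature and the 'examples'/'probabilities' keys from your own export and predictor code, with examples being your batch of serialized tf.Examples:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    model_path = 'model/1535016490'
    # load() returns the MetaGraphDef, which carries the SignatureDefs
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], model_path)
    signature = meta_graph_def.signature_def['predict_output']
    # map the logical names from export_outputs to actual tensor names
    inputs = sess.graph.get_tensor_by_name(signature.inputs['examples'].name)
    outputs = sess.graph.get_tensor_by_name(
        signature.outputs['probabilities'].name)
    res = sess.run(outputs, feed_dict={inputs: examples})

The C++ loader (tensorflow::LoadSavedModel) exposes the same MetaGraphDef in its SavedModelBundle, so as far as I know the same SignatureDef lookup should carry over to your team's C++ code.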

Also, this question on the frontpage of #tensorflow might be useful.

Good luck~~

Byest

For those who landed here looking for information about export_outputs and predictions, make sure to check out this question as well.

Milad Shahidi