I want to save my TensorFlow model and restore it later for prediction, so I use the estimator's export_savedmodel to save the model.
Following the docs, I use a serving_input_receiver_fn to specify the input. I also want to use export_outputs to specify the output, but I don't understand the difference between predictions and export_outputs.
if mode == tf.estimator.ModeKeys.PREDICT:
    export_outputs = {
        'predict_output': tf.estimator.export.PredictOutput({
            'class_ids': predicted_classes[:, tf.newaxis],
            'probabilities': tf.nn.softmax(logits),
            'logits': logits
        })
    }
    predictions = {
        'class': predicted_classes[:, tf.newaxis],
        'prob': tf.nn.softmax(logits),
        'logits': logits,
    }
    return tf.estimator.EstimatorSpec(
        mode, predictions=predictions, export_outputs=export_outputs)
Another question is how to use the saved .pb model to predict in a session:
with tf.Session(graph=tf.Graph()) as sess:
    model_path = 'model/1535016490'
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], model_path)
    inputs = sess.graph.get_tensor_by_name('input_example:0')
    # How do I get the output tensor?
    # outputs = sess.graph.get_tensor_by_name()
    res = sess.run([outputs], feed_dict={inputs: examples})
I can use tensorflow.contrib.predictor to get some results, but I want a universal method, because our team will restore the model in C++. So I think getting the tensors by name and running them in a session may be the method I want:
from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(
    export_dir='model/1535012949',
    signature_def_key='predict_output',
    tags=tf.saved_model.tag_constants.SERVING
)
predictions = predict_fn({'examples': examples})
Thanks very much for your help!