I want to use Google's AI Platform to deploy my Keras model, which requires the model to be in the TensorFlow SavedModel format. I am converting my Keras model to a TensorFlow estimator and then exporting that estimator. I run into issues when defining my serving_input_receiver_fn.
Here is a summary of my model:
Model: "model_49"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_49 (InputLayer) [(None, 400, 254)] 0
_________________________________________________________________
gru_121 (GRU) (None, 400, 64) 61248
_________________________________________________________________
gru_122 (GRU) (None, 64) 24768
_________________________________________________________________
dropout_73 (Dropout) (None, 64) 0
_________________________________________________________________
1M (Dense) (None, 1) 65
=================================================================
Total params: 86,081
Trainable params: 86,081
Non-trainable params: 0
_________________________________________________________________
and here is the error I run into:
KeyError: "The dictionary passed into features does not have the expected
inputs keys defined in the keras model.\n\tExpected keys:
{'input_49'}\n\tfeatures keys: {'col1','col2', ..., 'col254'}
Below is my code:

def serving_input_receiver_fn():
    feature_placeholders = {
        column.name: tf.placeholder(tf.float64, [None]) for column in INPUT_COLUMNS
    }
    # feature_placeholders = {
    #     'input_49': tf.placeholder(tf.float64, [None])
    # }
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def run():
    h5_model_file = '../models/model2.h5'
    json_model_file = '../models/model2.json'
    model = get_keras_model(h5_model_file, json_model_file)
    print(model.summary())
    estimator_model = tf.keras.estimator.model_to_estimator(
        keras_model=model, model_dir='estimator_model')
    export_path = estimator_model.export_saved_model(
        'export', serving_input_receiver_fn=serving_input_receiver_fn)
It seems that my model expects a single feature key, input_49 (the first layer of my neural network), whereas the serving_input_receiver_fn in the code samples I've seen feeds a dict of all the individual features into the model.
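To make the mismatch concrete, here is a small NumPy stand-in (not the TF graph itself; the batch size of 2 is arbitrary, and the column names and shapes are taken from my summary and error message above) showing how the 254 per-column inputs would have to be combined into the single (None, 400, 254) tensor the model expects under the key input_49:

```python
import numpy as np

# The served request arrives as 254 separate column arrays, but the Keras
# model wants one tensor named 'input_49' of shape (batch, 400, 254).
batch, timesteps, n_cols = 2, 400, 254  # batch size chosen arbitrarily
feature_dict = {f'col{i+1}': np.zeros((batch, timesteps)) for i in range(n_cols)}

# Stack the columns along the last axis, in a fixed order so the feature
# layout matches training; a serving fn would do the equivalent with tf.stack.
ordered = [feature_dict[f'col{i+1}'] for i in range(n_cols)]
model_input = np.stack(ordered, axis=-1)
print(model_input.shape)  # (2, 400, 254)
```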
How can I resolve this?
I am using tensorflow==2.0.0-beta1.