
I want to leverage Google's AI Platform to deploy my Keras model, which requires the model to be in the TensorFlow SavedModel format. I am converting the Keras model to a TensorFlow Estimator and then exporting that Estimator, but I run into issues defining my serving_input_receiver_fn.

Here is a summary of my model:

Model: "model_49"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_49 (InputLayer)        [(None, 400, 254)]        0
_________________________________________________________________
gru_121 (GRU)                (None, 400, 64)           61248
_________________________________________________________________
gru_122 (GRU)                (None, 64)                24768
_________________________________________________________________
dropout_73 (Dropout)         (None, 64)                0
_________________________________________________________________
1M (Dense)                   (None, 1)                 65
=================================================================
Total params: 86,081
Trainable params: 86,081
Non-trainable params: 0
_________________________________________________________________

and here is the error I run into:

KeyError: "The dictionary passed into features does not have the expected 
inputs keys defined in the keras model.\n\tExpected keys: 
{'input_49'}\n\tfeatures keys: {'col1','col2', ..., 'col254'}

Below is my code.

def serving_input_receiver_fn():
    feature_placeholders = {
        column.name: tf.placeholder(tf.float64, [None]) for column in INPUT_COLUMNS
    }

    # feature_placeholders = {
    #     'input_49': tf.placeholder(tf.float64, [None])
    # }
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }

    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)

def run():
    h5_model_file = '../models/model2.h5'
    json_model_file = '../models/model2.json'
    model = get_keras_model(h5_model_file, json_model_file)
    print(model.summary())

    estimator_model = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='estimator_model')
    export_path = estimator_model.export_saved_model(
        'export', serving_input_receiver_fn=serving_input_receiver_fn)

It seems that my model expects a single feature key, input_49 (the first layer of my network), whereas in the code samples I've seen, the serving_input_receiver_fn feeds a dict of all the individual features into the model.
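
Based on the error, I suspect the receiver has to assemble the individual columns into one tensor keyed by the input layer's name. A sketch of what I imagine that looks like (untested; the placeholder shapes and the stacking axis are guesses on my part):

def serving_input_receiver_fn():
    # One placeholder per raw column; each carries a length-400 sequence.
    feature_placeholders = {
        column.name: tf.compat.v1.placeholder(tf.float32, [None, 400])
        for column in INPUT_COLUMNS
    }
    # Stack the 254 columns into a single (batch, 400, 254) tensor and
    # expose it under the Keras input layer's name.
    stacked = tf.stack(
        [feature_placeholders[column.name] for column in INPUT_COLUMNS],
        axis=-1)
    return tf.estimator.export.ServingInputReceiver(
        {'input_49': stacked}, feature_placeholders)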

How can I resolve this?

I am using tensorflow==2.0.0-beta1.


2 Answers


I've managed to save a Keras model and host it with TF Serving using the tf.saved_model.builder.SavedModelBuilder() object. I'm not sure whether this generalizes easily to your application, but below is what worked for me, made as general as I can make it.

import os
import tensorflow as tf  # TF 1.x-style API throughout (tf.compat.v1 in TF 2.x)

# Assumes `model` is your loaded Keras model and `sess` is the session
# holding its graph (e.g. sess = tf.keras.backend.get_session()).

# Set the path where the model will be saved.
export_base_path = os.path.abspath('models/versions/')
model_version = '1'
export_path = os.path.join(tf.compat.as_bytes(export_base_path),
                           tf.compat.as_bytes(model_version))
# Make the model builder.
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
# Define the TensorInfo protocol buffer objects that encapsulate our
# input/output tensors.
# Note you can have a list of model.input layers, or just a single model.input
# without any indexing. I'm showing a list of inputs and a single output layer.
# Input tensor info.
tensor_info_input0 = tf.saved_model.utils.build_tensor_info(model.input[0])
tensor_info_input1 = tf.saved_model.utils.build_tensor_info(model.input[1])
# Output tensor info.
tensor_info_output = tf.saved_model.utils.build_tensor_info(model.output)

# Define the call signatures used by the TF Predict API. Note the name
# strings here should match what the layers are called in your model definition.
# Might have to play with that because I forget if it's the name parameter, or
# the actual object handle in your code.
prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input0': tensor_info_input0, 'input1': tensor_info_input1},
        outputs={'prediction': tensor_info_output},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

# Now we build the SavedModel protocol buffer object and then save it.
builder.add_meta_graph_and_variables(sess,
                                     [tf.saved_model.tag_constants.SERVING],
                                     signature_def_map={'predict': prediction_signature})
builder.save(as_text=True)
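
As a quick sanity check, you can load the export back and inspect the signature before pointing TF Serving at it. A sketch using the same TF 1.x-style API (export_path is the directory the builder wrote to):

with tf.Session(graph=tf.Graph()) as sess:
    # Reload the SavedModel under the serving tag it was saved with.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_path)
    # Print the 'predict' signature registered above.
    print(meta_graph.signature_def['predict'])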

I will try to find the references that got me here, but I failed to make a note of them at the time. I'll update with links when I find them.


I ended up changing the following:

feature_placeholders = {
    column.name: tf.placeholder(tf.float64, [None]) for column in INPUT_COLUMNS
}

to this:

feature_placeholders = {
    'input_49': tf.placeholder(tf.float32, (254, None), name='input_49')
}

and I was able to get a folder with my saved_model.pb.
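
To double-check the export locally before uploading to AI Platform, a sketch like this works with the TF 2.x loading API (export_path here is whatever export_saved_model() returned):

import tensorflow as tf

# Load the exported SavedModel and inspect the serving signature's inputs.
loaded = tf.saved_model.load(export_path)
serving_fn = loaded.signatures['serving_default']
print(serving_fn.structured_input_signature)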
