
I am trying to deploy a tf.keras image classification model to Google CloudML Engine. Do I have to include code to create a serving graph separately from training to get it to serve my models in a web app? I already have my model in SavedModel format (saved_model.pb and variables files), so I'm not sure if I need this extra step to get it to work.

e.g. this is code directly from the GCP TensorFlow "Deploying models" documentation:

def json_serving_input_fn():
  """Build the serving inputs."""
  inputs = {}
  for feat in INPUT_COLUMNS:
    inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)

  return tf.estimator.export.ServingInputReceiver(inputs, inputs)
  • check here for the other issue when exporting the model: https://stackoverflow.com/questions/54615708/exporting-a-keras-model-as-a-tf-estimator-trained-model-cannot-be-found/54615713#54615713 – sdcbr Feb 11 '19 at 07:20

1 Answer


You are probably training your model with actual image files, while it is best to send images to a model hosted on CloudML as encoded byte strings. Therefore, as you mention, you'll need to specify a serving input receiver function when exporting the model. Some boilerplate code to do this for a Keras model:

import tensorflow as tf

# Convert the trained Keras model ('model') to a TF Estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

# Your serving input function will accept an encoded image string
# and decode it into an image tensor
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        # Cast to float32 so the output matches the dtype expected by
        # tf.map_fn below; apply additional preprocessing here if
        # necessary (e.g. resizing to the model's input shape, rescaling)
        return tf.cast(image, tf.float32)

    # Accept a batch of encoded images so the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
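
Once deployed, the model expects each instance to carry the base64-encoded image bytes under the image_bytes key; the _bytes suffix on the key, together with the b64 wrapper, tells CloudML to decode the value back into a byte string before it reaches the serving graph. A rough sketch of an online prediction request with the Google API Python client - the project, model, version and file names below are placeholders:

import base64
import googleapiclient.discovery

# Placeholder resource name - replace with your own project/model/version
name = 'projects/my-project/models/my_model/versions/v1'

# Base64-encode a local test image; the {'b64': ...} wrapper plus the
# '_bytes' suffix on the key make CloudML pass raw bytes to the graph
with open('test.png', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

instances = [{'image_bytes': {'b64': image_b64}}]

service = googleapiclient.discovery.build('ml', 'v1')
response = service.projects().predict(
    name=name, body={'instances': instances}).execute()
print(response['predictions'])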

You can refer to this very helpful answer for a more complete reference and other options for exporting your model.

Edit: If this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround that I documented in this answer.
