
I am trying to alter my Inception network (built in Keras) to take base64-encoded image strings as input for predictions. After that I want to save it as a TensorFlow network (a .pb file), since that's what Google ML Engine requires.

The normal way of predicting looks like this:

from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input
import numpy as np

img = image.load_img("image.jpg")

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
score = model.predict(x)

So I'm trying to implement this and then save it like this:

input_images = tf.placeholder(dtype=tf.string, shape=[])
decoded = tf.image.decode_image(input_images, channels=3)
image = tf.cast(decoded, dtype=tf.uint8)
afbeelding = Image.open(io.BytesIO(image))

x = image.img_to_array(afbeelding)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
scores = model.predict(decoded)
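
For reference, the plain-Python version of what I'm trying to express inside the graph would be something like this (the decode_base64_image helper is just my own sketch of the intent, not graph code):

```python
import base64
import io

import numpy as np
from PIL import Image

# Sketch of the intended flow: decode a base64 string into raw bytes,
# open those bytes as an image, and turn it into a float array.
def decode_base64_image(b64_string):
    raw = base64.b64decode(b64_string)
    img = Image.open(io.BytesIO(raw)).convert('RGB')
    return np.asarray(img, dtype=np.float32)
```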


signature = predict_signature_def(inputs={'image_bytes': input_images},
                                  outputs={'predictions': scores})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()

But here image is a tensor, not an actual image. To be honest, I don't know how to fully implement this. There's no way of getting the actual value of a tensor while building the graph, right? I really hope someone can help me with this.

JonasP

1 Answer


You should be able to use the tensorflow.keras.estimator.model_to_estimator() function to convert your Keras model to a TensorFlow estimator. Then you can build and export the graph for generating predictions. The code should look something like this:

import os

from tensorflow import keras

h5_model_path = os.path.join('path_to_model.h5')
estimator = keras.estimator.model_to_estimator(keras_model_path=h5_model_path)

I've only tested this with models built using tf.keras, but it should work with native Keras models as well.

Then for building the graph with the components to handle the base64 input, you can do something like this:

import tensorflow as tf
HEIGHT = 128
WIDTH = 128
CHANNELS = 3
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        # Decode the raw JPEG bytes, resize to the model's input size,
        # then cast back to uint8 so the dtype matches what map_fn expects.
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tf.expand_dims(image, 0)
        image = tf.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tf.squeeze(image, axis=[0])
        image = tf.cast(image, dtype=tf.uint8)
        return image

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

export_path = 'exported_model_directory'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)

The exported model can then be uploaded to Google Cloud ML Engine and used to serve predictions. I spent a while struggling to get all of this working, and I put together a fully functional code example that might be of additional use: https://github.com/mhwilder/tf-keras-gcloud-deployment.
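
When calling the deployed model, ML Engine expects base64 data wrapped in a {"b64": ...} object, keyed by the serving input name ('image_bytes' in the receiver above). Here's a minimal sketch of building such a request body; the build_request helper is hypothetical, just to show the payload shape:

```python
import base64
import json

# Hypothetical helper: build the JSON body for an online prediction request.
# The {"b64": ...} wrapper is ML Engine's convention for base64 payloads,
# and 'image_bytes' must match the input name in the ServingInputReceiver.
def build_request(image_bytes):
    encoded = base64.b64encode(image_bytes).decode('utf-8')
    return json.dumps({'instances': [{'image_bytes': {'b64': encoded}}]})
```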

mhwilder