I am a newbie with TF and deployment on GCP, so thank you so much in advance for your help!

Currently I am trying to deploy my MNIST handwriting Flask app on Google Cloud Platform (GCP) using TensorFlow Serving. I have deployed my model on TF Serving and use a custom MySimpleScaler class to preprocess and resize images before feeding them to my model. My question is whether there is a way to add the preprocessing and resizing class to my saved model, so that my Flask app does not have any TensorFlow dependencies. The reason is that the TF library is too large for App Engine.

The flow of my app is as follows:

1) My Flask app is deployed on App Engine. It has a MySimpleScaler class to resize the image input from the canvas. I let the user draw on the canvas on the front end --> grab the data with jQuery --> write it to output.jpg with the parse_image function --> read output.jpg from the local disk and feed it to MySimpleScaler for preprocessing.

2) My model is deployed on AI Platform using TF Serving. I send a predict request with the output from MySimpleScaler in step 1. The predicted value is returned to the Flask backend, and I then push it to the frontend with Jinja. (A rough sketch of the TF-free request I'd like to end up with is shown right after this list.)
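
This is a rough, hypothetical sketch of the kind of request I'm hoping the Flask app can send once the preprocessing lives inside the saved model (the endpoint URL, model name, and input key are placeholders; binary tensor values go over the REST API as base64 strings wrapped in a {"b64": ...} object):

import base64
import requests

def request_prediction(jpeg_bytes):
    # Flask only base64-encodes the raw JPEG; decoding/resizing would happen in the
    # exported model, so no TensorFlow import is needed on App Engine.
    payload = {
        "instances": [
            {"image_bytes": {"b64": base64.b64encode(jpeg_bytes).decode("utf-8")}}
        ]
    }
    response = requests.post(
        "https://MODEL_HOST/v1/models/mnist:predict",  # placeholder serving endpoint
        json=payload,
    )
    return response.json()["predictions"]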

Here are the two functions I use to get and preprocess the data:

import base64
import re

import tensorflow as tf


def parse_image(imgData):
    # imgData is the raw canvas payload fetched with request.get_data()
    imgstr = re.search(b"base64,(.*)", imgData).group(1)
    img_decode = base64.decodebytes(imgstr)
    with open("output.jpg", "wb") as file:
        file.write(img_decode)
    return img_decode


class MySimpleScaler(object):

    def preprocess_img(self, img_decode):
        # img_decode comes from parse_image
        img_raw = img_decode
        image = tf.image.decode_jpeg(img_raw, channels=1)
        image = tf.image.resize(image, [28, 28])
        image = (255 - image) / 255.0  # invert and scale to the [0, 1] range
        image = tf.reshape(image, (1, 28, 28, 1))

        return image

TL;DR: I want to add the preprocess_img function as one of the layers in my saved model before deploying it to TF Serving. Thank you so much for your time!

novicecoder

1 Answer

If you are okay with `batch_size=1`, it should be straightforward to add the preprocessing function inside the graph. Here is how I'd do it.

Code:

import tensorflow as tf
import numpy as np

print('TensorFlow:', tf.__version__)

def preprocess_single_image(image_bytes, h=299, w=299):
    # image_bytes has shape (1,); decode the single serialized JPEG it holds
    image = tf.image.decode_jpeg(image_bytes[0], channels=3)
    image = tf.image.resize(image, size=[h, w])
    image = (image - 127.5) / 127.5  # scale to [-1, 1], which Xception expects
    image = tf.expand_dims(image, axis=0)  # restore the batch axis
    return image

# String input lets the client send raw JPEG bytes; preprocessing lives inside the graph
image_bytes = tf.keras.Input(shape=[], batch_size=1, name='image_bytes', dtype=tf.string)
preprocessed_image = preprocess_single_image(image_bytes)
model = tf.keras.applications.Xception(weights='imagenet')
predictions = model(preprocessed_image)
new_model = tf.keras.Model(image_bytes, predictions)
new_model.save('export/1', save_format='tf')
print('Model Input Shape:', new_model.input_shape)

### !wget -q -O "cat.jpg" "https://images.pexels.com/photos/617278/pexels-photo-617278.jpeg?cs=srgb&dl=adorable-animal-blur-cat-617278.jpg&fm=jpg"
loaded_model = tf.saved_model.load('export/1')
cat_bytes = tf.expand_dims(tf.io.read_file('cat.jpg'), axis=0)  # shape (1,) to match the input
preds = loaded_model(cat_bytes).numpy()
print(tf.keras.applications.xception.decode_predictions(preds, top=3)[0])

Output:

TensorFlow: 2.0.0
WARNING:tensorflow:From /tensorflow-2.0.0/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: export/1/assets

Model Input Shape: (1,)
[('n02123045', 'tabby', 0.5762127), ('n02123159', 'tiger_cat', 0.24783427), ('n02124075', 'Egyptian_cat', 0.09435685)]

PS: You could use `tf.map_fn` if you want to extend this to support `batch_size > 1`.
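
A minimal, untested sketch of that `tf.map_fn` variant (same preprocessing as above, but mapped over a dynamically sized batch of serialized JPEGs; the export path is just an example):

import tensorflow as tf

def preprocess_batch(image_bytes, h=299, w=299):
    def _decode_and_resize(single_bytes):
        image = tf.image.decode_jpeg(single_bytes, channels=3)
        image = tf.image.resize(image, size=[h, w])
        return (image - 127.5) / 127.5
    # Map the per-image preprocessing over the batch of serialized JPEG strings
    return tf.map_fn(_decode_and_resize, image_bytes, dtype=tf.float32)

image_bytes = tf.keras.Input(shape=[], name='image_bytes', dtype=tf.string)  # batch size left dynamic
preprocessed_images = preprocess_batch(image_bytes)
model = tf.keras.applications.Xception(weights='imagenet')
predictions = model(preprocessed_images)
batched_model = tf.keras.Model(image_bytes, predictions)
batched_model.save('export_batched/1', save_format='tf')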

Srihari Humbarwadi
  • Thank you so much for your help! I just have one more stupid question about tf.keras.Input(). Why did you wrap the output of tf.io.read_file() in tf.expand_dims() instead of passing the string bytes directly? Does tf.keras.Input() require a specific shape? I tried it without expand_dims and it yielded this error: `Shape must be rank 0 but is rank 1 for 'DecodeJpeg' (op: 'DecodeJpeg') with input shapes: [1].` – novicecoder Dec 17 '19 at 04:21
  • Keras models need a `batch` axis, which forces you to send your image bytes with a `batch` axis. But `tf.io.read_file` doesn't support batching, so you need to reshape your tensor accordingly! – Srihari Humbarwadi Dec 17 '19 at 06:50
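
To make the shape point from this exchange concrete, a tiny illustrative snippet (the file name is just an example):

import tensorflow as tf

raw = tf.io.read_file('cat.jpg')       # scalar string tensor, shape ()
batched = tf.expand_dims(raw, axis=0)  # shape (1,), matching the model's (batch,) string input
print(raw.shape, batched.shape)        # () (1,)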