4

I am following the TensorFlow Serving documentation to convert my trained model into a format that can be served in a Docker container. As I'm new to TensorFlow, I am struggling to convert this trained model into a form suitable for serving.

The model is already trained and I have the checkpoint file and the .meta file. So, I need to produce the saved_model.pb file and the variables folder from those two files. Can anyone please suggest an approach for getting this done so the model can be served? The directory structure I need is shown below.

.
|-- tensorflow model
    |-- 1
        |-- saved_model.pb
        |-- variables
            |-- variables.data-00000-of-00001
            |-- variables.index
Ashish

3 Answers


There are multiple ways of doing this, and other methods may be required for more complex models. I am currently using the method described here, which works well for tf.keras.models.Model and tf.keras.Sequential models (I'm not sure about TensorFlow subclassing).

Below is a minimal working example, including creating a model in Python (judging by your folder structure, it seems you have already completed this step and can skip it):

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K

inputs = Input(shape=(2,))
x = Dense(128, activation='relu')(inputs)
x = Dense(32, activation='relu')(x)
outputs = Dense(1)(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='mse')

# loading existing weights; the model architecture must be the same as that of the existing model
#model.load_weights(".//MODEL_WEIGHT_PATH//WEIGHT.h5") 

export_path = 'SAVE_PATH//tensorflow_model//1'

with K.get_session() as sess:
    tf.saved_model.simple_save(
            sess,
            export_path,
            inputs={'inputs': model.input}, # for single input
            #inputs={t.name[:-5]: t for t in model.input}, # for multiple inputs
            outputs={'outputs': model.output})

I suggest you use the folder name "tensorflow_model" instead of "tensorflow model" to avoid possible problems caused by the space.

Then we can start the serving container in a terminal with the following command (on Windows, use ^ instead of \ for line breaks, and use //C/ instead of C:\ in paths):

docker run -p 8501:8501 --name tfserving_test \
  --mount type=bind,source="SAVE_PATH/tensorflow_model",target=/models/tensorflow_model \
  -e MODEL_NAME=tensorflow_model -t tensorflow/serving

Now the container should be up and running, and we can test the serving from Python:

import requests
import json
#import numpy as np

payload = {
  "instances": [{'inputs': [1.,1.]}]
}

r = requests.post('http://localhost:8501/v1/models/tensorflow_model:predict', json=payload)
print(json.loads(r.content))
# {'predictions': [[0.121025]]}

The container is working with our model, giving the prediction 0.121025 for the input [1., 1.].

KrisR89
  • I've changed that to Tensorflow_model, but I'm still struggling to restore the variables from my VGG-16 model trained using TensorFlow. Without the variables, how can I serve it using Docker? @KrisR89 – Ashish Jun 18 '19 at 10:55
  • I'm not sure if I understand your question fully @Ashish, but tf.saved_model.simple_save() saves the model with the same folder structure as you show in your question. Then you only need to point to your parent folder "tensorflow_model", and TensorFlow Serving should automatically pick up the correct model (.pb), variables, and variables.index. – KrisR89 Jun 18 '19 at 11:05
  • Having said that, I am struggling to restore the trained model, so I don't have any of the files shown in the above structure. Your example depicts the serving method for Keras, and I'm looking for the same for plain TensorFlow. – Ashish Jun 18 '19 at 11:09
  • I am so sorry, I did not read your question carefully enough! I am not an expert on the TensorFlow low-level API, and it seems like TF 2.0 will shift focus away from graphs and towards tf.keras and subclassing (if you are new to TensorFlow, I suggest you do the same). That being said, I think you should still be able to use tf.saved_model.simple_save() with your model session and your input and output tensors, along the lines of the sketch below. – KrisR89 Jun 18 '19 at 11:20
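
Following up on that comment, here is a rough, untested sketch of exporting with tf.saved_model.simple_save when starting from a checkpoint and .meta file. The checkpoint paths and the tensor names 'input:0' and 'output:0' are placeholders that you would need to replace with the names from your own graph:

import tensorflow as tf

export_path = 'SAVE_PATH/tensorflow_model/1'

with tf.Session(graph=tf.Graph()) as sess:
    # rebuild the graph from the .meta file and restore the trained variables
    saver = tf.train.import_meta_graph('CHECKPOINT_DIR/model.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('CHECKPOINT_DIR'))

    graph = tf.get_default_graph()
    inputs = graph.get_tensor_by_name('input:0')    # replace with your input tensor name
    outputs = graph.get_tensor_by_name('output:0')  # replace with your output tensor name

    # writes saved_model.pb plus the variables folder expected by TensorFlow Serving
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'inputs': inputs},
        outputs={'outputs': outputs})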

I hope this helps:

import tensorflow as tf
from tensorflow.contrib.keras import backend as K

# set the learning phase to inference mode before exporting
K.set_learning_phase(0)
model = tf.keras.models.load_model('my_model.h5')

# for TensorFlow Serving, export into a numbered version folder, e.g. './1'
export_path = './'
with K.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs}
    )
    print('Converted to SavedModel!!!')
Yesken

From your question, do you mean that you no longer have access to the model code and that you only have the checkpoint files and the .meta file?

If that is the case, you can refer to the links below, which contain code for converting those files into a '.pb' file.

Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file

https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/python/tools/freeze_graph.py
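
As a rough, untested illustration of the approach in those links (this produces a single frozen graph.pb rather than the SavedModel folder used in the other answers; the checkpoint paths and the output node name 'output' are placeholders for your own graph):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # rebuild the graph and restore the trained weights
    saver = tf.train.import_meta_graph('CHECKPOINT_DIR/model.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('CHECKPOINT_DIR'))

    # replace variables with constants, keeping only the nodes the output depends on
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output'])

    # write everything into one graph.pb file
    tf.train.write_graph(frozen_graph_def, '.', 'graph.pb', as_text=False)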

If you do have access to the trained model, then I guess you are currently saving it using tf.train.Saver. Instead of that, you can save and export the model using any of the three commonly used approaches mentioned below:

  1. tf.saved_model.simple_save => In this case, only the Predict API is supported during serving. An example of this is given by KrisR89 in his answer.

  2. tf.saved_model.builder.SavedModelBuilder => In this case, you can define the SignatureDefs, i.e., the APIs you want to expose during serving (a rough sketch is shown after this list). You can find an example of how to use it at the following link: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py

  3. The third way is shown below:

    classifier = tf.estimator.DNNClassifier(
        config=training_config,
        feature_columns=feature_columns,
        hidden_units=[256, 32],
        optimizer=tf.train.AdamOptimizer(1e-4),
        n_classes=NUM_CLASSES,
        dropout=0.1,
        model_dir=FLAGS.model_dir)

    classifier.export_savedmodel(
        FLAGS.saved_dir,
        serving_input_receiver_fn=serving_input_receiver_fn)
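
As referenced in option 2 above, a rough, untested sketch of exporting with SavedModelBuilder might look like the following (the tiny placeholder graph, the export path, and the tensor names are assumptions standing in for your own restored graph):

import tensorflow as tf

# build (or restore) a graph; a tiny placeholder graph stands in for your model here
graph = tf.Graph()
with graph.as_default():
    input_tensor = tf.placeholder(tf.float32, shape=[None, 2], name='input')
    output_tensor = tf.layers.dense(input_tensor, 1, name='output')

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder('SAVE_PATH/tensorflow_model/1')

    # describe the tensors exposed through the Predict API as a SignatureDef
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'inputs': input_tensor},
        outputs={'outputs': output_tensor})

    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})

    builder.save()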

An example of how to save a model using Estimators can be found at the link below; this approach supports the Predict and Classification APIs. A minimal sketch of a serving_input_receiver_fn follows after the link.

https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_premodeled_estimator.py
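
The estimator snippet in option 3 above leaves serving_input_receiver_fn undefined. A minimal, untested sketch of such a function (the feature name 'x' and its shape are hypothetical; use your own feature names and shapes) could look like this:

import tensorflow as tf

def serving_input_receiver_fn():
    # 'x' and its shape are placeholders for your real input features
    inputs = {'x': tf.placeholder(tf.float32, shape=[None, 4], name='x')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)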

Let me know if this information helps or if you need anything further.