
I created a modified LeNet model using TensorFlow that looks like this:

import tensorflow as tf
from tensorflow.keras import layers, models

img_height = img_width = 64
BS = 32

model = models.Sequential()
model.add(layers.InputLayer((img_height, img_width, 1), batch_size=BS))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), batch_size=BS, padding='valid'))
model.add(layers.Dropout(0.25))
model.add(layers.Conv2D(filters=128, kernel_size=(1, 1), strides=(1, 1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.Dropout(0.5))
model.add(layers.Conv2D(filters=2, kernel_size=(1, 1), strides=(1, 1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Activation('softmax'))
model.summary()

When I finish training, I save the model using tf.keras.models.save_model:

import time

num = time.time()
tf.keras.models.save_model(model, './saved_models/' + str(num) + '/')

Then I convert this model to ONNX format using the tf2onnx module:

! python -m tf2onnx.convert --saved-model saved_models/1645088924.84102/ --output 1645088924.84102.onnx

I want a method that can load the same model back into TensorFlow 2.x. I tried to use onnx_tf to convert the ONNX model into a TensorFlow .pb model:

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("1645088924.84102.onnx")  # load the ONNX model
tf_rep = prepare(onnx_model)  # prepare the TF representation
tf_rep.export_graph("1645088924.84102.pb")  # export the graph as a .pb file

However, this method generates only a .pb file, while the load_model method in TensorFlow 2.x also requires two folders named "variables" and "assets" in the same directory as the .pb file.

If there is a way to make the .pb file work as if it had the "assets" and "variables" folders, or a method that can generate a complete model from ONNX, either solution would be appreciated.

I'm using a JupyterHub server, and everything runs inside an Anaconda environment.

subspring
  • Does [this](https://stackoverflow.com/q/53182177/12750353) help you? If you want only to use the model, then it is better to use [onnxruntime](https://pypi.org/project/onnxruntime/) (see the sketch after these comments) – Bob Mar 01 '22 at 07:54
  • For the first suggestion, in my use case I can't use tflite. But for the second one, I will take a look at it, thanks. – subspring Mar 01 '22 at 07:57
  • 1
    You can also convert .pb file back to .h5 file and reuse the model. For running inference , you can use graph_def and concrete_function . https://stackoverflow.com/questions/54767281/how-to-convert-pb-file-to-h5-tensorflow-model-to-keras https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/ –  Apr 12 '22 at 05:41
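
Following the onnxruntime suggestion in the first comment, here is a minimal inference sketch. It assumes the converted file name from above, that tf2onnx kept the original NHWC input layout, and the fixed batch size of 32 from the model definition; the input and output names are queried from the session rather than hard-coded:

import numpy as np
import onnxruntime as ort

# open an inference session on the converted model
sess = ort.InferenceSession("1645088924.84102.onnx")

# the ONNX graph defines its own input/output names, so look them up
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# dummy batch matching the model's expected input: (BS, 64, 64, 1), float32
x = np.random.rand(32, 64, 64, 1).astype(np.float32)

# run() returns a list of outputs; here the softmax probabilities with shape (32, 2)
probs = sess.run([output_name], {input_name: x})[0]
print(probs.shape)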

1 Answer


As it turns out, the easiest way to do this is what TensorFlow Support suggested in the comment on the original post: convert the .pb file back to .h5 and then reuse the model. For inference, we can use graph_def and a concrete_function.

Converting .pb to .h5: How to convert .pb file to .h5. (Tensorflow model to keras) – https://stackoverflow.com/questions/54767281/how-to-convert-pb-file-to-h5-tensorflow-model-to-keras

For inference: https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/
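
As a rough sketch of the frozen-graph inference approach from the linked blog post, assuming the exported .pb holds a frozen GraphDef and that the input and output tensor names are "x:0" and "Identity:0" (placeholders; inspect your own graph for the actual names):

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # import the frozen GraphDef into a wrapped tf.function and prune it
    # down to the requested input/output tensors
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

# load the frozen graph exported earlier
graph_def = tf.compat.v1.GraphDef()
with open("1645088924.84102.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

frozen_func = wrap_frozen_graph(graph_def, inputs="x:0", outputs="Identity:0")
predictions = frozen_func(tf.random.normal([32, 64, 64, 1]))  # shape (32, 2)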

subspring