
I've exported my model to ONNX via:

# Export the model
torch_out = torch.onnx._export(learn.model,                 # model being run
                               x,                           # model input (or a tuple for multiple inputs)
                               EXPORT_PATH + "mnist.onnx",  # where to save the model (can be a file or file-like object)
                               export_params=True)          # store the trained parameter weights inside the model file

Now I am trying to convert the model to a TensorFlow Lite file so that I can do inference on Android. Unfortunately, PyTorch/Caffe2 support for Android is fairly lacking or too complex, whereas TensorFlow appears much simpler.

The documentation on converting ONNX to TFLite is pretty light on this.

I've tried exporting to a Tensorflow GraphDef proto via:

tf_rep.export_graph(EXPORT_PATH + 'mnist-test/mnist-tf-export.pb')
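
(Here tf_rep is the TensorFlow representation returned by onnx-tf's prepare; a minimal sketch of that step, assuming the same paths as above:)

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load(EXPORT_PATH + "mnist.onnx")  # the ONNX file exported above
tf_rep = prepare(onnx_model)                        # build a TensorFlow representation of the graph
tf_rep.export_graph(EXPORT_PATH + 'mnist-test/mnist-tf-export.pb')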

And then running toco:

toco \
--graph_def_file=mnist-tf-export.pb \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--inference_type=FLOAT \
--input_type=FLOAT \
--input_arrays=0 \
--output_arrays=add_10 \
--input_shapes=1,3,28,28 \
--output_file=mnist.tflite

When I do, though, I get the following error:

File "anaconda3/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2018-11-06 16:28:33.864889: I tensorflow/lite/toco/import_tensorflow.cc:1268] Converting unsupported operation: PyFunc
2018-11-06 16:28:33.874130: F tensorflow/lite/toco/import_tensorflow.cc:114] Check failed: attr.value_case() == AttrValue::kType (1 vs. 6)

Further, even when I run the command I don't know what to specify for the input_arrays or output_arrays since the model was originally built in PyTorch.
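
A rough way to look up candidate node names from the exported GraphDef (a sketch, assuming TF 1.x APIs; Placeholder ops are usually the inputs):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile(EXPORT_PATH + 'mnist-test/mnist-tf-export.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)   # pick input/output names from this list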

Has anyone successfully converted their ONNX model to TFlite?

Here's the ONNX file I'm trying to convert: https://drive.google.com/file/d/1sM4RpeBVqPNw1WeCROpKLdzbSJPWSK79/view?usp=sharing

Extra info

  • Python 3.6.6 :: Anaconda custom (64-bit)
  • onnx.__version__ = '1.3.0'
  • tf.__version__ = '1.13.0-dev20181106'
  • torch.__version__ = '1.0.0.dev20181029'
Suhail Doshi
    Update: Unfortunately there's just not good support for this and I'd (at this time/date) advise going the caffe2 route or making the model in Tensorflow. – Suhail Doshi Mar 10 '19 at 06:24
  • As you also said in your comment, PyTorch now encapsulates Caffe2 so you are directly able to deploy. – Tyathalae May 29 '19 at 15:27
  • Now you can run PyTorch models directly on mobile phones. Check out PyTorch Mobile's documentation: https://pytorch.org/mobile/home/ – Ahwar Sep 24 '20 at 09:21

4 Answers


I think the ONNX file you have shared, i.e. model.onnx, is corrupted. I don't know what the issue is, but it does not do any inference on ONNX Runtime.
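
For anyone who wants to check their own file, a hedged sketch of two quick ways to verify whether an ONNX model is usable (assumes onnxruntime is installed; the input shape is taken from the question):

import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("mnist.onnx")
onnx.checker.check_model(model)               # raises if the model is structurally invalid

sess = ort.InferenceSession("mnist.onnx")     # fails if the model cannot be loaded
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 28, 28).astype(np.float32)
print(sess.run(None, {input_name: dummy}))    # runs one dummy inference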

Now you can run PyTorch models directly on mobile phones. Check out PyTorch Mobile's documentation here.

This answer is for TensorFlow version 1.
For TensorFlow version 2 or higher, click this link.

The best way to convert the model from a protobuf FreezeGraph to TFLite is to use the official TensorFlow Lite converter documentation.

According to TensorFlow Docs, TocoConverter has been deprecated

This class (tf.compat.v1.lite.TocoConverter) has been deprecated. Please use lite.TFLiteConverter instead.

Convert from PyTorch to ONNX model

The best practice when converting the model from PyTorch to ONNX is to add the following parameters to the torch.onnx.export() function, specifying the names of the input and output layers of your model:


# Export the model from PyTorch to ONNX
torch_out = torch.onnx._export(model,                        # model being run
                               x,                            # model input (or a tuple for multiple inputs)
                               EXPORT_PATH + "mnist.onnx",   # where to save the model (can be a file or file-like object)
                               export_params=True,           # store the trained parameter weights inside the model file
                               input_names=['main_input'],   # specify the name of the input layer in the ONNX model
                               output_names=['main_output']) # specify the name of the output layer in the ONNX model
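
If you want to confirm which names actually ended up in the exported ONNX graph (these are the names you will pass to the TFLite converter later), a small sketch (note that graph.input may also list initializers with some exporters):

import onnx

onnx_model = onnx.load(EXPORT_PATH + "mnist.onnx")
print([i.name for i in onnx_model.graph.input])    # should include 'main_input'
print([o.name for o in onnx_model.graph.output])   # should include 'main_output'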

So, in your case: now export this model to a TensorFlow protobuf FreezeGraph using onnx-tf.

Please note that this method only works when tensorflow_version < 2.

Convert from ONNX to TensorFlow FreezeGraph

To convert the model, install onnx-tf version 1.5.0 with the command below:

pip install onnx-tf==1.5.0

Now, to convert the .onnx model to a TensorFlow freeze graph, run the command below in a shell:

onnx-tf convert -i "mnist.onnx" -o "mnist.pb"

Convert from TensorFlow FreezeGraph .pb to TFLite

Now, to convert this model from the .pb file to a TFLite model, use this code:

import tensorflow as tf
# make a converter object from the saved tensorflow file
converter = tf.lite.TFLiteConverter.from_frozen_graph('mnist.pb', #TensorFlow freezegraph .pb model file
                                                      input_arrays=['main_input'], # name of input arrays as defined in torch.onnx.export function before.
                                                      output_arrays=['main_output']  # name of output arrays defined in torch.onnx.export function before.
                                                      )
# tell converter which type of optimization techniques to use
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# to view the best option for optimization read documentation of tflite about optimization
# go to this link https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional

# convert the model 
tf_lite_model = converter.convert()
# save the converted model 
open('mnist.tflite', 'wb').write(tf_lite_model)

To choose which optimization option is best for your model's use case, see this official guide about TensorFlow Lite optimization:

https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional
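
If the conversion succeeds, a quick sanity check with the TFLite interpreter is possible (a sketch; the dummy input shape is taken from the interpreter itself, and a float32 input is assumed):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='mnist.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))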

Note: You can try my Jupyter notebook "Convert ONNX model to TensorFlow Lite" on Google Colaboratory: link

Ahwar
  • I cannot `from onnx_tf.backend import prepare`. Could you tell me the exact version of onnx, onnx_tf and tensorflow that you were using? The complaint is `import tensorflow_addons as tfa` -> `ModuleNotFoundError: No module named 'tensorflow_addons'` – mcExchange Mar 02 '20 at 18:25
  • Seems like the current master branch of onnx-tensorflow is for TF >= 2.0. For TF < 2.0 there is another branch called `tf-1.x` – mcExchange Mar 03 '20 at 11:04
  • Exporting as frozen graph works with the above mentioned branch but when converting to tflite I get `Unexpected value for attribute 'data_format'. Expected 'NHWC' Fatal Python error: Aborted` – mcExchange Mar 03 '20 at 11:22
  • I used onnx-tf version 1.3, which only supported tf_version < 2.0; I didn't test versions after that. And keep in mind to use input_names, output_names in torch.onnx._export()'s parameters – Ahwar Mar 03 '20 at 17:18
  • Specifying input and output_names does not get rid of the error message. My PyTorch model expects an input of [1, 3, 384, 256] corresponding to [batch-size, channels, height, width] -> so it is not 'NHWC'; however, PyTorch does not support 'NHWC'. – mcExchange Mar 04 '20 at 14:25
  • @Ahwar : In my case, if `output_names=["main_output"]`, I get the tensor_name of output _array as `main_output`, if I leave `output_names=[]` empty ( just omitted`output_names`). the tensor_name is `109`, and not `add_17` as it should be from the tensor ops dictionary we get from `lib/python3.7/site-packages/tensorflow_core/lite/python/util.py`, so instead of `add_10`, pls edit it as `output_arrays=main_output`, as that is the tensor_name in this case. – aspiring1 May 13 '20 at 07:33
  • @mcExchange and anyone else having issues importing onnx: note that my method is only for TensorFlow versions less than two ('tensorflow_version < 2') and uses onnx-tf version 1.5.0 from PyPI; to install onnx-tf I used the command `pip install onnx-tf==1.5.0` – Ahwar May 15 '20 at 11:58
  • 1
    @aspiring1 and mcExchane. Thanks for Reporting All issues regarding importing and 'NHWC' error are resolved and I have improved my code. Please see the process again answer is updated. Again note for now I have on onnx-tf version 1.5.0 installed using PyPI. and tensorflow version must be less then 2. I will update about tensorflow_version > 2 soon. I have also included Google Colab Jupyter Notebook. Try It. – Ahwar May 15 '20 at 12:03
  • 1
    Thx a lot for the Colab Code. The conversion is working finally. However it seems that the converted tflite model is not working on GPU on the smartphone. CPU mode works but also looks much slower (~10 fold) than the corresponding model taken build in TF directly. Did you have similar experience? – mcExchange Jun 12 '20 at 11:13
  • I am working on it will reply here as soon as I find any solution to it. – Ahwar Sep 01 '20 at 18:07
  • @mcExchange I have noticed that CPU inference is slow but haven't found any solution. When I run on the Android GPU it gives the error. Can you please share the detailed error prompt with me showing what is causing the issue? – Ahwar Sep 24 '20 at 09:09

Now you can run PyTorch models directly on mobile phones. Check out PyTorch Mobile's documentation here.

This answer is for TensorFlow version 2 or higher.
For TensorFlow version 1, click here.

The best way to convert the model from a protobuf FreezeGraph to TFLite is to use the official TensorFlow Lite converter documentation.

According to TensorFlow Docs, TocoConverter has been deprecated

This class (tf.compat.v1.lite.TocoConverter) has been deprecated. Please use lite.TFLiteConverter instead.

Convert from PyTorch to ONNX model

# Export the model from PyTorch to ONNX
torch_out = torch.onnx.export(model,                       # model being run
                              x,                           # model input (or a tuple for multiple inputs)
                              EXPORT_PATH + "mnist.onnx",  # where to save the model (can be a file or file-like object)
                              export_params=True)          # store the trained parameter weights inside the model file

So, in your case: now export this model to a TensorFlow protobuf FreezeGraph using onnx-tf.

Convert from ONNX to TensorFlow FreezeGraph

To convert the model, install onnx-tf with the commands below:

git clone https://github.com/onnx/onnx-tensorflow.git && cd onnx-tensorflow
pip install -e .

Now, to convert the .onnx model to a TensorFlow freeze graph, run the command below in a shell:

onnx-tf convert -i "mnist.onnx" -o "mnist.pb"
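
If you prefer the Python API over the CLI, a rough equivalent (assuming the current onnx-tensorflow master branch with TF 2.x). Note that export_graph() writes a SavedModel directory at the given path, which is what the converter below expects:

import onnx
from onnx_tf.backend import prepare

tf_rep = prepare(onnx.load("mnist.onnx"))  # TensorFlow representation of the ONNX graph
tf_rep.export_graph("mnist.pb")            # 'mnist.pb' ends up as a SavedModel directory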

Convert from TensorFlow FreezeGraph .pb to TFLite

Now, to convert this model from the .pb file to a TFLite model, use this code:

import tensorflow as tf
# make a converter object from the saved TensorFlow model
# (from_saved_model expects the SavedModel directory written by onnx-tf above)
converter = tf.lite.TFLiteConverter.from_saved_model('mnist.pb')
# tell converter which type of optimization techniques to use
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# to view the best option for optimization read documentation of tflite about optimization
# go to this link https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional

# convert the model 
tf_lite_model = converter.convert()
# save the converted model 
open('mnist.tflite', 'wb').write(tf_lite_model)

To choose which optimization option is best for your model's use case, see this official guide about TensorFlow Lite optimization:

https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional

Ahwar

In Google Colab:

!pip install onnx2keras
import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load('model.onnx')
k_model = onnx_to_keras(onnx_model, ['input'], change_ordering=True)

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(k_model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)
  • Unfortunately, it doesn't work. onnx2keras has a bug and can not convert convolution layers properly. In particular, there is a problem with adding ZeroPadding2D – Eugene Alexeev Mar 21 '23 at 11:59
  • It throws ValueError: '/backbone/backbone.0/Conv_output_0_pad/' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$. I'm looking at how to resolve the forward slash as the first symbol of the node name. – Igor Jun 23 '23 at 16:36

I know this is a topic that most people have lost interest in, since it has been a long time, but because development of onnx-tf has been terminated, I am creating my own conversion tool. It can convert ONNX to TensorFlow/Keras/TFLite models. I keep adding and improving commits every day, so it has injection features for avoiding various errors, although some models may be unlucky enough to hit conversion errors.

I would be happy if I could be of any help to you.

https://github.com/PINTO0309/onnx2tf

1. Install

  • Local
    pip install -U onnx \
    && pip install -U nvidia-pyindex \
    && pip install -U onnx-graphsurgeon \
    && pip install -U onnxruntime==1.13.1 \
    && pip install -U onnxsim \
    && pip install -U simple_onnx_processing_tools \
    && pip install -U onnx2tf \
    && pip install -U h5py==3.7.0
    

or

  • Docker
    docker run --rm -it \
    -v `pwd`:/workdir \
    -w /workdir \
    ghcr.io/pinto0309/onnx2tf:1.8.25
    

or

  • Google Colab
    !sudo add-apt-repository -y ppa:deadsnakes/ppa
    !sudo apt-get -y update
    !sudo apt-get -y install python3.9
    !sudo apt-get -y install python3.9-dev
    !sudo apt-get -y install python3-pip
    !sudo apt-get -y install python3.9-distutils
    !wget https://github.com/PINTO0309/onnx2tf/releases/download/1.7.3/flatc.tar.gz \
      && tar -zxvf flatc.tar.gz \
      && sudo chmod +x flatc \
      && sudo mv flatc /usr/bin/
    !python3.9 -m pip install -U setuptools \
      && python3.9 -m pip install -U pip \
      && python3.9 -m pip install -U distlib
    !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1
    !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
    !python3.9 -m pip install tensorflow==2.12.0 \
      && python3.9 -m pip install -U onnx \
      && python3.9 -m pip install -U nvidia-pyindex \
      && python3.9 -m pip install -U onnx-graphsurgeon \
      && python3.9 -m pip install -U onnxruntime==1.13.1 \
      && python3.9 -m pip install -U onnxsim \
      && python3.9 -m pip install -U simple_onnx_processing_tools \
      && python3.9 -m pip install -U onnx2tf \
      && python3.9 -m pip install -U protobuf==3.20.3 \
      && python3.9 -m pip install -U h5py==3.7.0
    

2. Convert

onnx2tf -i mnist.onnx -osd -cotof


The -cotof option checks for errors between the output of the original ONNX model and the output of the converted TensorFlow model; it is only a verification step, so it is not strictly necessary to specify it.
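
onnx2tf also exposes a Python API. A minimal sketch (parameter names taken from the project's README; treat the exact options as assumptions and check the repository for your installed version):

from onnx2tf import convert

convert(
    input_onnx_file_path="mnist.onnx",   # ONNX model to convert
    output_folder_path="saved_model",    # SavedModel and .tflite files are written here
)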

PINTO0309