31

I have generated a .tflite model from a trained model, and I would like to test that the .tflite model gives the same results as the original model.

That is, given the same test data, both should produce the same result.

miaout17
Jorge Jiménez

3 Answers

33

You may use the TensorFlow Lite Python interpreter to test your tflite model.

It allows you to feed input data from the Python shell and read the output directly, just as if you were using a normal TensorFlow model.
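
For example, here is a minimal sketch of that workflow (the path model.tflite and the random test input are placeholders, and on older 1.x releases the interpreter lives under tf.contrib.lite rather than tf.lite):

    import numpy as np
    import tensorflow as tf

    # Load the converted model and allocate its tensors.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Build a test sample with the shape and dtype the model expects.
    sample = np.random.random_sample(input_details[0]['shape']).astype(input_details[0]['dtype'])

    # Feed it in, run inference, and read the output back.
    interpreter.set_tensor(input_details[0]['index'], sample)
    interpreter.invoke()
    print(interpreter.get_tensor(output_details[0]['index']))

To check against the original model, run the same sample through it and compare the two outputs.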

I have answered this question here.

For more detailed information, you can read this official TensorFlow Lite guide.

You can also use Netron to visualize your model. It allows you to load your .tflite file directly and inspect your model architecture and model weights.

hlzl
Jing Zhao
  • This worked for me, thank you. By the way, the TensorFlow Lite model doesn't give the same results as the Python model; the differences are very big, for example Python accuracy 79% vs. tflite accuracy 50%. Do you know how to improve this? Maybe a parameter or a better export function? I am currently using toco convert on the frozen graph. – Jorge Jiménez Aug 28 '18 at 07:55
  • https://stackoverflow.com/questions/52057552/tensorflow-lite-model-gives-very-different-accuracy-value-compared-to-python-mod – Jorge Jiménez Aug 28 '18 at 12:10
  • I'm not familiar with tflite, sorry I cannot help you. I'd suggest comparing the output arrays of these two models given the same input array. Actually, I also ran into this issue when converting a model to tflite, and in the end I found I had used different checkpoint files, which caused the problem. – Jing Zhao Aug 28 '18 at 12:14
2

There is a tflite_diff_example_test in the TensorFlow code base. It generates random data, feeds the same data into TensorFlow and TensorFlow Lite, and then checks whether the difference is within a small threshold.

You can check out the TensorFlow code from GitHub and run it with bazel:

bazel run //tensorflow/contrib/lite/testing:tflite_diff_example_test

then you'll see what arguments you need to pass.
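
If building TensorFlow from source is a hurdle, a rough hand-rolled equivalent of the same idea can be put together in Python: run identical random data through both the original model and the converted one and compare the outputs. In this sketch, original_model is a placeholder for however you evaluate your original graph (a Keras model here; with a frozen graph you would run the corresponding session instead), and the tolerance is only an assumption to tune.

    import numpy as np
    import tensorflow as tf

    # Run the converted model on a random sample.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")   # placeholder path
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    x = np.random.random_sample(inp['shape']).astype(inp['dtype'])
    interpreter.set_tensor(inp['index'], x)
    interpreter.invoke()
    tflite_y = interpreter.get_tensor(out['index'])

    # Run the original model on the same sample (placeholder call).
    tf_y = original_model.predict(x)

    print("max abs difference:", np.abs(tf_y - tflite_y).max())
    assert np.allclose(tf_y, tflite_y, atol=1e-5), "outputs differ beyond the threshold"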

miaout17
  • Thank you for your answer. For this I have to have TensorFlow compiled from source, right? Is there another way to try this? (I have tried to compile TensorFlow with bazel but it always gives errors.) – Jorge Jiménez Jun 10 '18 at 08:08
  • Could you tell me how I can test your answer? It sounds like it could be what I need, but how can I test it without compiling all of TensorFlow from source? It keeps giving errors. – Jorge Jiménez Jun 11 '18 at 19:19
2

In addition to the answer given by @miaout17, to debug/understand your tflite model (which is the spirit of the question), you can use flatc (the FlatBuffers compiler) with the TFLite schema to convert the .tflite file to JSON and inspect it, or generate a Python API from the FlatBuffer schema and read the model's structure (tensors, operators, weights) programmatically.
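
As a concrete illustration of the second route, here is a minimal sketch. It assumes you have already run flatc --python against the TFLite schema (schema.fbs), so that the generated tflite package (Model.py and friends) is on your PYTHONPATH; the file name model.tflite is a placeholder.

    # Inspect a .tflite file through the flatc-generated Python bindings.
    from tflite.Model import Model   # module generated by: flatc --python schema.fbs

    with open("model.tflite", "rb") as f:    # placeholder path
        buf = bytearray(f.read())

    model = Model.GetRootAsModel(buf, 0)
    print("schema version:", model.Version())

    subgraph = model.Subgraphs(0)            # most converted models have a single subgraph
    print("inputs :", [subgraph.Tensors(subgraph.Inputs(i)).Name()
                       for i in range(subgraph.InputsLength())])
    print("outputs:", [subgraph.Tensors(subgraph.Outputs(i)).Name()
                       for i in range(subgraph.OutputsLength())])
    print("operators:", subgraph.OperatorsLength())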

Pannag Sanketi
  • Thank you for your answer. Using flatc, I already created a JSON file from the tflite model. Having that, how can I test that the model behaves like, or gives the same results as, the original model? – Jorge Jiménez Jun 10 '18 at 08:21
  • I'm not sure you can directly test using JSON like that. You can use flatc to generate a Python API from the FlatBuffer schema and then use the Python API to feed the same data to both the TF and TFLite models and compare the answers. – Pannag Sanketi Jun 11 '18 at 22:52
  • I was trying to export different classifiers to the tflite format, not just the DNN. Could you please help me figure out how to choose the input and output tensors? How did you know that you should choose dnn/input_from_feature_columns/input_layer/concat:0 for the input tensor, or dnn/logits/BiasAdd:0 for the output? I already printed all possible tensors in the linear classifier, but I don't know which to choose to make it work. Could you look at this: https://stackoverflow.com/questions/51267129/how-to-know-which-tensor-to-choose-from-the-list-of-tensor-names-in-graph – Jorge Jiménez Jul 24 '18 at 18:03
  • https://stackoverflow.com/questions/52057552/tensorflow-lite-model-gives-very-different-accuracy-value-compared-to-python-mod – Jorge Jiménez Aug 28 '18 at 12:11