
I am training an object detection model with Azure customvision.ai. The model is exported for TensorFlow as either a saved model (.pb), .tf, or .tflite.

The model output type is designated as float32[1,13,13,50]
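That single float32[1,13,13,50] tensor looks like a raw YOLO-style grid rather than the four post-processed tensors an SSD detection pipeline emits. A minimal sketch to confirm how many output tensors the export actually has, assuming the CPU .tflite export is named model.tflite (an Edge-TPU-compiled model would additionally need the libedgetpu delegate):

import tensorflow as tf

# List the output tensors of the exported model.
# "model.tflite" is a placeholder for the customvision.ai export.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
# A single line such as "... [ 1 13 13 50] <class 'numpy.float32'>"
# means the model has one output tensor, not four.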

I then push the .tflite onto a Google Coral Edge device and attempt to run it (previous .tflite models trained with Google Cloud worked, but I'm now bound to corporate Azure and need to use customvision.ai). The commands run on the device are:

$ mdt shell

$ export DEMO_FILES="/usr/lib/python3/dist*/edgetpu/demo"

$ export DISPLAY=:0 && edgetpu_detect \
    --source /dev/video1:YUY2:1280x720:20/1 \
    --model ${DEMO_FILES}/model.tflite

Finally, the model attempts to run, but it fails with a ValueError:

'This model has a {}.'.format(output_tensors_sizes.size)))
ValueError: Detection model should have 4 output tensors! This model has 1.

What is happening here? How do I reshape my tensorflow model to match the device requirements of 4 output tensors?

The model that works: [screenshot]

The model that does not work: [screenshot]

Edit: the following outputs a .tflite model, but it still has only one output:

python tflite_convert.py \
--output_file=model.tflite \
--graph_def_file=saved_model.pb \
--saved_model_dir="C:\Users\b0588718\AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python" \
--inference_type=FLOAT \
--input_shapes=1,416,416,3  \
--input_arrays=Placeholder \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--mean_values=128 \
--std_dev_values=128 \
--allow_custom_ops \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true
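One quick check is whether the TFLite_Detection_PostProcess arrays named above exist in the exported graph at all; given the single float32[1,13,13,50] output, they likely do not, in which case tflite_convert has nothing to map those names to. A minimal sketch for dumping candidate node names, assuming saved_model.pb is a frozen GraphDef (for a true SavedModel bundle, inspect it with saved_model_cli instead):

import tensorflow as tf

# Parse the frozen graph and print nodes that could serve as inputs or
# outputs; if no TFLite_Detection_PostProcess node is listed, the
# conversion above cannot produce four detection tensors.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("saved_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if "PostProcess" in node.name or node.op == "Placeholder":
        print(node.op, node.name)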
Iorek

1 Answer


You are running an object detection demo where the engine expects 4 outputs from the model, but your model only has one output. Maybe the tflite conversion was incorrect? For instance, if you grabbed the Face SSD model from our model zoo, the conversion should look like this:

$ tflite_convert \
  --output_file=face_ssd.tflite \
  --graph_def_file=tflite_graph.pb \
  --inference_type=QUANTIZED_UINT8 \
  --input_shapes=1,320,320,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays="TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
  --mean_values=128 \
  --std_dev_values=128 \
  --allow_custom_ops \
  --change_concat_input_ranges=false \
  --allow_nudging_weights_to_use_fast_gemm_kernel=true
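If the graph does contain the post-processing op, a conversion like this should yield exactly four output tensors (boxes, class ids, scores, and detection count). A quick check on the file produced above:

import tensorflow as tf

# Verify that the converted model exposes the four detection tensors.
interpreter = tf.lite.Interpreter(model_path="face_ssd.tflite")
interpreter.allocate_tensors()

outputs = interpreter.get_output_details()
assert len(outputs) == 4, f"expected 4 output tensors, got {len(outputs)}"
for detail in outputs:
    print(detail["name"], detail["shape"])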

Take a look at a similar query for more details: https://github.com/google-coral/edgetpu/issues/135#issuecomment-640677917

Nam Vu
    Hey, thanks! I tried this (left the changes I made above in the OP), and while it still outputs a tflite model, it does not change the model output. Any thoughts? – Iorek Jun 24 '20 at 17:02
  • Hmm, I'm not so sure what type of model Azure produces; I assumed it's an SSD MobileNet since you tried to use it with our detection engine. What is it that you're expecting from the outputs of your model? I suggest using the BasicEngine or the pure tflite API instead of the detection engine – Nam Vu Jun 24 '20 at 18:28
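For reference, the BasicEngine route mentioned in the last comment might look like the sketch below, using the legacy edgetpu Python API. The model path, image file, and 416x416 input size are assumptions, and the raw [1,13,13,50] output still has to be decoded by hand:

import numpy as np
from PIL import Image
from edgetpu.basic.basic_engine import BasicEngine

# BasicEngine runs any Edge TPU model and returns the raw output,
# sidestepping DetectionEngine's 4-output-tensor requirement.
engine = BasicEngine("model.tflite")  # placeholder path

# Match the 1x416x416x3 input used in the conversion above; the engine
# expects a flattened uint8 array.
image = Image.open("frame.jpg").resize((416, 416))
input_tensor = np.asarray(image, dtype=np.uint8).flatten()

latency_ms, raw_output = engine.run_inference(input_tensor)

# raw_output is a flat array; reshape it to the model's grid and decode
# the YOLO-style boxes and scores manually.
grid = raw_output.reshape((13, 13, 50))
print("inference took", latency_ms, "ms; grid shape:", grid.shape)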