I have created a detection model in TensorFlow 2.2.0 and want to use it, for inference only, from a C application on a 32-bit x86 platform.
I would also like the inference code to run in the same process as the calling module.
TensorFlow is not available for 32-bit x86, so I cannot use the TensorFlow C API or embed the Python interpreter via the Python C API.
TensorFlow Lite doesn't support x86.
I've tried OpenCV 4, but 'readNetFromTensorflow' seems to fail with TF 2.x models (the solution given here https://stackoverflow.com/a/45466355/12725394 for TensorFlow 1 models didn't work for me).
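
To illustrate, this is roughly the freeze-then-load sequence I attempted, adapted from the TF 1 approach in that answer. The SavedModel directory, the "serving_default" signature name, and the output file name are just placeholders for my actual detection model:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import cv2

# Load the trained TF 2.x SavedModel and pick its inference signature
# ("serving_default" is a placeholder; my model may expose another name).
model = tf.saved_model.load("saved_model_dir")
concrete_func = model.signatures["serving_default"]

# Freeze variables into constants and write a frozen GraphDef (.pb),
# which is the format readNetFromTensorflow expects.
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".", name="frozen_graph.pb", as_text=False)

# Loading the frozen graph with OpenCV's dnn module is where it fails for me.
net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")
```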
Overall, I haven't found any working solution for the 32-bit / TensorFlow 2.x combination.