
I have created a detection model in TensorFlow 2.2.0 and want to use it from a 32-bit x86 C application, exclusively for inference.
I would also like the inference to run in the same process as the calling module.

TensorFlow isn't available for 32-bit platforms, so I can't use the TensorFlow C API or the Python C API.
TensorFlow Lite doesn't support 32-bit x86.
I've tried OpenCV 4, but readNetFromTensorflow seems to fail with TF 2.x models (the solution given here https://stackoverflow.com/a/45466355/12725394 for TensorFlow 1 models didn't work for me).
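
For context, the kind of conversion I was attempting looks roughly like this, a sketch of the TF 2.x frozen-graph approach (the TF 2.x counterpart of the freezing step in the linked answer); "saved_model_dir", the input spec, and the output file name are placeholders, and it assumes the model loads as a Keras model:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import cv2

# Load the trained detection model (path is a placeholder).
model = tf.keras.models.load_model("saved_model_dir")

# Wrap the model in a concrete function and freeze its variables into constants.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Serialize the frozen GraphDef to disk.
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir=".", name="frozen_graph.pb", as_text=False)

# Try to load the frozen graph with the OpenCV DNN module
# (the equivalent C++ call would be cv::dnn::readNetFromTensorflow).
net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb")
```

If this loading step worked, the application could in principle link OpenCV and run the frozen graph in-process from the C/C++ side, which is why I went down this route.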

Overall, I haven't found any working solution for the combination of 32-bit and TensorFlow 2.x.

Nathan
  • Are you sure you can't just use 64-bit code? 32-bit is getting more obsolete every year, especially for number crunching. – Peter Cordes Jun 05 '20 at 14:58
  • @PeterCordes It's feasible, I guess, but unfortunately recompiling the application in 64-bit would take me days (lots of dependencies and libs currently don't work with 64-bit...) – Nathan Jun 05 '20 at 15:12
  • You're probably going to want that at some point; now might be a good time to start. Unless you'd still need to support 32-bit builds with tensorflow for computers running 32-bit-only OSes, in which case being *able* to make 64-bit builds wouldn't make this problem go away. – Peter Cordes Jun 05 '20 at 15:16

0 Answers