
This is a follow-on to "how-to-build-and-use-google-tensorflow-c-api": can anyone explain how to build a TensorFlow C++ program on an ARM processor? I'm thinking specifically of Nvidia's Jetson family of GPU devices. Nvidia has lots and lots of documentation for these, but it all seems to be for Python (like this) and for toy examples, with nothing for anyone who wants to write a C++ program using the full TensorFlow API (if one even exists) for their own machine-learning models. I'd like to be able to build programs like this one, which does deep-learning inference, exactly what the Jetson is supposedly made for.

I've found websites that offer links to installers too, but they all seem to be for the x86 architecture rather than ARM.

I have the same question about Bazel. I gather from all the unsatisfactory documentation I've been looking at that Bazel is mandatory for anyone who wants to build TensorFlow programs that use a GPU, but all of the installation instructions I can find are either incomplete or for a different architecture such as x86 (for example, https://www.osetc.com/en/how-to-install-bazel-on-ubuntu-14-04-16-04-18-04-linux.html).
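For what it's worth, the closest I've come to a plan for Bazel itself is bootstrapping it from the "dist" source archive, since there are no prebuilt ARM binaries. This is only a sketch of what I understand the process to be, untested on my Jetson; the version number is just an example (it has to match whatever the TensorFlow release I check out requires), and the package list is my guess at the prerequisites:

```shell
# Assumed prerequisites: a JDK, a C/C++ toolchain, python, and zip/unzip.
sudo apt-get install -y build-essential openjdk-11-jdk python3 zip unzip

# The -dist.zip release artifact carries everything needed to self-compile
# Bazel, so it works on architectures with no official binary.
# (0.26.1 is only an example version.)
wget https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-dist.zip
mkdir bazel-dist && cd bazel-dist
unzip ../bazel-0.26.1-dist.zip

# compile.sh bootstraps Bazel and leaves the binary in output/.
env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
sudo cp output/bazel /usr/local/bin/
```

If someone has actually done this on a Jetson, I'd love to know which steps differ.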

I'll add that any link or GitHub repository that dumps a load of code in my lap without making clear the prerequisites (since my little Jetson may not have the stuff installed that you assume) or the commands needed to actually build it (especially if it includes a project file for a compiler I've never heard of) isn't much help.
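For concreteness, here is the kind of build I'm hoping to run once Bazel exists on the box. This is transplanted from the x86 build-from-source instructions, so whether `--config=cuda` behaves correctly for the Jetson's integrated GPU is an assumption on my part:

```shell
# Run from the root of a tensorflow source checkout.
# ./configure asks interactively about CUDA support, compute capability, etc.
./configure

# Build the C++ example program that ships in the TensorFlow repo.
bazel build --config=cuda //tensorflow/examples/label_image:label_image

# The resulting binary lands under bazel-bin/.
./bazel-bin/tensorflow/examples/label_image/label_image --help
```

An answer that confirms (or corrects) this invocation for ARM64 would go a long way.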

user2084572
  • I'm very interested in this question. But let me ask you what's your reason to write TF code in C/C++? – Hamed Jan 20 '20 at 17:51
  • Have you tried this [link](https://www.tensorflow.org/lite/guide/build_arm64)? I think you need to compile Tensorflow Lite. – Hamed Jan 20 '20 at 17:54
  • For your first question, the reasons are 1) my employer wants me to benchmark performance vs. the Python inference; 2) the TensorRT guide says specifically "The C++ API should be used in any performance-critical scenarios, as well as in situations where safety is important, for example, in automotive." (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html) Both of these reasons apply here. – user2084572 Jan 27 '20 at 19:42
  • Even more specifically, I'd like to try building the example program at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc and those headers aren't part of Tensorflow Lite. – user2084572 Jan 27 '20 at 19:43
  • Well, as far as I know, using TensorFlow even in Python does not mean you're running Python code. TF uses SWIG to glue Python to C. In other words, when you run TF, you're actually running C code, not Python. Python just constructs the graph, checks some parameters, etc. The actual execution happens through the C API. – Hamed Jan 28 '20 at 15:27
  • Take a look https://dev.to/martinezpeck/challenge-accepted-build-tensorflow-c-binding-for-raspberry-pi-in-2019-4f89 If you cross-compile for ARM64 with the CUDA option enabled it could work. The guy from the post tried doing it for RPI, but the process would be almost the same for Jetson (just the CUDA flag I believe). You will have to configure everything and solve many errors, could take 1 week+, IMO. – Bersan May 05 '20 at 18:10

0 Answers