
To run a TensorFlow Lite model that uses native TensorFlow operations (select TF ops), the libtensorflow-lite static library has to be recompiled. The instructions for doing this in C++ can be found HERE.

It states:

When building TensorFlow Lite libraries using the bazel pipeline, the additional TensorFlow ops library can be included and enabled as follows:

  • Enable monolithic builds if necessary by adding the --config=monolithic build flag.

  • Add the TensorFlow ops delegate library dependency to the build dependencies: tensorflow/lite/delegates/flex:delegate.

Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the delegate is linked into the client library. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.
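Concretely, a BUILD file for a client binary following the second bullet might look like the sketch below (the target name and source file are hypothetical; only the flex delegate dependency comes from the docs quoted above):

```
cc_binary(
    name = "myapp",          # hypothetical target name
    srcs = ["myapp.cc"],
    deps = [
        "//tensorflow/lite:framework",
        # Linking in the flex delegate is enough; per the note above, no
        # explicit delegate installation is needed at runtime.
        "//tensorflow/lite/delegates/flex:delegate",
    ],
)
```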

The problem is that the standard way of building the static library is via a shell script/Make (see the docs HERE; this is for arm64, but there are scripts for x86_64 as well). There is no obvious way for me to build tensorflow-lite via Bazel and modify the build commands there.

Has anybody successfully built this for the arm64/x86_64 architectures and can share the steps? I'm new to Bazel and cannot find a detailed walkthrough.

EDIT

After the troubleshooting steps proposed by @jdehesa, I was able to build libtensorflowlite.so, but ran into another problem. My app builds successfully, but when the app is executed, the .so file cannot be found:

./myapp: error while loading shared libraries: libtensorflowlite.so: cannot open shared object file: No such file or directory

The paths are correct, as other .so files located in the same directory are found without problems. Also, the app works when using the static library.

To reproduce the issue, I used the tensorflow/tensorflow:devel-gpu-py3 docker build image (instructions found here).

I executed the configure script with default settings, and used the command

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so

to create the library. I have uploaded my built library to my personal repo (https://github.com/DocDriven/debug-lite).

Avi
DocDriven

1 Answer


EDIT: It seems the experimental option with_select_tf_ops was removed shortly after this was posted. As far as I can tell, there does not seem to be any built-in option to include the TF delegate library in the current build script for libtensorflowlite. If you want to build the library with Bazel, it seems the only option at the moment is to include tensorflow/lite/delegates/flex:delegate in the list of target dependencies, as suggested in the docs.

A few days ago a commit was submitted with initial support for building TFLite with CMake. In that build script there is an option SELECT_TF_OPS to include the delegates library in the build. I don't know if that build works at the moment, but I suppose it will become part of an upcoming official release eventually.
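If that CMake path pans out, the flow would presumably look something like this sketch (the SELECT_TF_OPS option name is taken from that build script; everything else, including the paths, is an assumption):

```
git clone https://github.com/tensorflow/tensorflow.git
mkdir tflite_build && cd tflite_build
# SELECT_TF_OPS is the option said to include the TF ops delegate library.
cmake ../tensorflow/tensorflow/lite -DSELECT_TF_OPS=ON
cmake --build . -j
```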


It appears that libtensorflow-lite.a is built with Makefiles, outside of Bazel, so I'm not sure you can actually use that option for that library. There is, however, an experimental shared library target libtensorflowlite.so, which I think may be what you need. You can pass the experimental option with_select_tf_ops to include the TensorFlow kernels in it. So I think the build command would be something like:

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so
jdehesa
  • Thanks for that, I successfully created a shared library. But for some odd reason, the BUILD file seems to have a bug because the executable cannot find the .so file. The paths are correct because there are other shared objects in the same place. I have found a [similar question](https://stackoverflow.com/questions/49834875/problems-with-using-tensorflow-lite-c-api-in-android-studio-project), but the workaround does not work for me. Have you encountered this problem as well? – DocDriven Nov 04 '19 at 13:57
  • 1
    @DocDriven I haven't used TFLite myself, so I'm not sure what may be the issue... So you linked your C++ app to that .so library and it builds but it fails when running? What is the error message that you get, just `error while loading shared libraries: libtensorflowlite.so: cannot open shared object file: No such file or directory`? – jdehesa Nov 04 '19 at 14:08
  • That is correct. Both the library and the app build without errors, but when executing the app, I do get this exact error you described. The source file of the .so file you linked has multiple cases for the linkopts, but none of them apply to my system (x86_64 architecture Ubuntu 18.04). I tried to replace the select block with the commands from the answer I linked, but without success and the same error. – DocDriven Nov 04 '19 at 14:19
  • @DocDriven Have you checked that is actually the problem? I'd think Bazel or something else in the TF build system would ensure that the soname is set correctly... You can check the soname in your .so library with `objdump -p libtensorflowlite.so | grep SONAME`. About `linkopts`, if you are on Linux I think it's the `//conditions:default` branch, if you want to edit it... Note the answer you linked uses `libtensorflowLite.so` instead of `libtensorflowlite.so` (different letter casing). – jdehesa Nov 04 '19 at 14:40
  • I was not aware of objdump, so I did not check it beforehand. Turns out, soname is correct. I did notice the letter casing. So both these options do not seem to cause the problem. I am sure that the problem has to do with the shared object, because I can build and link my app if I am using the static library created by the shell scripts/make. Both the static and dynamic library reside in the same directory, but the .so file cannot be found. If it helps, I could create a repository for you to test all of the code I use, if you are willing to help me out :) – DocDriven Nov 04 '19 at 14:58
  • @DocDriven I don't have a Linux machine at hand right now unfortunately, but if you upload your built `libtensorflowlite.so` file somewhere at least, to save the source setup and build (indicate source version to grab the correct headers, or include headers with library, and compiler version), it should be easy to test with a minimal compile-and-run program if I get a moment later (or by someone else). – jdehesa Nov 04 '19 at 15:10
  • 1
    I have updated my original post with some of the information. As I used an official docker image to build tensorflow lite, you can pull the correct sources right from dockerhub. Also, I have uploaded my build to GitHub. If you need more information, do not hesitate to ask. Thanks for the support! – DocDriven Nov 04 '19 at 15:48
  • 1
    I have found the problem. The build command is correct, but you have to modify `.bazelrc`. For troubleshooting, see my corresponding github issue: https://github.com/tensorflow/tensorflow/issues/33980. Thanks! – DocDriven Nov 07 '19 at 15:31
  • @DocDriven Thank you for the feedback (and the bounty, not sure it was so deserved), glad you were able to solve it. I did try to get it to work in Docker as you suggested but after some tries I couldn't crack it. Good to know there's a solution after all. – jdehesa Nov 07 '19 at 15:53
  • This thread is a few weeks old, but I came back for a very similar question. Do you know by any chance, which build flag has to be set in order to build this library for ARM? Thanks! – DocDriven Nov 25 '19 at 09:25
  • @DocDriven I don't have actual experience with that I'm afraid... In theory it should be a matter of adding something like `--cpu=arm` / `--cpu=arm64-v8a` / `--cpu=armv7a` / `--cpu=armeabi-v7a` (I took these from the values contemplated in [this file](https://github.com/tensorflow/tensorflow/blob/v2.0.0/tensorflow/lite/kernels/internal/BUILD)), or maybe something else if for Android... but I'm not sure if you need other options too, and whether you have to setup the toolchain yourself... – jdehesa Nov 25 '19 at 10:48
  • Thank you for the quick response. I was unable to find that file. I will experiment with these flags for a while. This proves that the bounty was indeed well deserved! – DocDriven Nov 25 '19 at 12:01
  • Has `with_select_tf_ops` `config_setting` been removed in 2.3? – pooya13 Aug 24 '20 at 19:34
  • 1
    @pooya13 Yes, apparently that option does not exist anymore, I have updated the answer. – jdehesa Aug 25 '20 at 09:55