Info
- I built a TensorFlow (TF) model with Keras and converted it to TensorFlow Lite (TFL)
- I built an Android app in Android Studio and used the Java API to run the TFL model
- In the Java app, I used the TFL Support Library (see here) and the TensorFlow Lite AAR from JCenter, by including
implementation 'org.tensorflow:tensorflow-lite:+'
under my build.gradle dependencies.
Inference times are not so great, so now I want to use TFL in Android's NDK.
So I built an exact copy of the Java app in Android Studio's NDK, and now I'm trying to include the TFL libs in the project. I followed TensorFlow-Lite's Android guide and built the TFL library locally (and got an AAR file), and included the library in my NDK project in Android Studio.
Now I'm trying to use the TFL library in my C++ file by #include-ing it, but I get an error message: cannot find tensorflow (or whatever other name I try, depending on the name I give it in my CMakeLists.txt file).
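As far as I can tell, the "cannot find" message is a compile-time header-lookup failure rather than a linker problem, so I suspect something like the following is missing from my CMakeLists.txt (a sketch only; TFLITE_SRC is a hypothetical path to a local checkout of the TensorFlow source tree, and the flatbuffers path assumes the makefile build has downloaded its dependencies):

```cmake
# Hypothetical: point the compiler at a local TensorFlow checkout so that
# #include "tensorflow/lite/interpreter.h" and the flatbuffers headers resolve.
# Added after the add_library(native-lib ...) call.
set(TFLITE_SRC /path/to/tensorflow)  # assumption: full source tree checked out
target_include_directories(native-lib PRIVATE
        ${TFLITE_SRC}
        ${TFLITE_SRC}/tensorflow/lite/tools/make/downloads/flatbuffers/include)
```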
Files
App build.gradle:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.ndk.tflite"
        minSdkVersion 28
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }
        ndk {
            abiFilters 'arm64-v8a'
        }
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    // tf lite
    aaptOptions {
        noCompress "tflite"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'

    // tflite build
    compile(name: 'tensorflow-lite', ext: 'aar')
}
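(A side note on the AAR line: compile has been deprecated for a while with this Gradle plugin version, so the same local-AAR dependency could also be written with implementation, paired with the flatDir repository below; a sketch, assuming the file sits at app/libs/tensorflow-lite.aar:)

```gradle
dependencies {
    // Assumes app/libs/tensorflow-lite.aar plus a flatDir { dirs 'libs' }
    // repository in the project-level build.gradle
    implementation(name: 'tensorflow-lite', ext: 'aar')
}
```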
Project build.gradle:
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.2'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        // native tflite
        flatDir {
            dirs 'libs'
        }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
        native-lib
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        native-lib.cpp )

add_library( # Sets the name of the library.
        tensorflow-lite
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        native-lib.cpp )

find_library( # Sets the name of the path variable.
        log-lib
        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log )

target_link_libraries( # Specifies the target library.
        native-lib tensorflow-lite
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib} )
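For comparison, my understanding is that a prebuilt libtensorflow-lite.so would normally be declared as an IMPORTED target rather than with the second add_library(... native-lib.cpp) above, which as written just compiles native-lib.cpp a second time. A sketch (the libs/${ANDROID_ABI} layout and the library filename are assumptions):

```cmake
# Assumed layout: src/main/cpp/libs/<abi>/libtensorflow-lite.so
add_library(tensorflow-lite SHARED IMPORTED)
set_target_properties(tensorflow-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libtensorflow-lite.so)

target_link_libraries(native-lib tensorflow-lite ${log-lib})
```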
native-lib.cpp:
#include <jni.h>
#include <string>

#include "tensorflow"

extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}

class FlatBufferModel {
    // Build a model based on a file. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromFile(
            const char* filename,
            ErrorReporter* error_reporter);

    // Build a model based on a pre-loaded flatbuffer. The caller retains
    // ownership of the buffer and should keep it alive until the returned object
    // is destroyed. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
            const char* buffer,
            size_t buffer_size,
            ErrorReporter* error_reporter);
};
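Since BuildFromBuffer leaves ownership of the buffer with the caller, the surrounding code has to read the model file into memory and keep that buffer alive for the model's whole lifetime. A self-contained sketch of that read step (standard C++ only, no TFLite calls, so it compiles without the headers in question; the function name is mine):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Read a whole file into a byte buffer; returns an empty vector on failure.
// The caller must keep this buffer alive while the FlatBufferModel built
// from it is in use.
std::vector<char> ReadModelFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) return {};
    std::streamsize size = file.tellg();   // opened at end, so this is the size
    file.seekg(0, std::ios::beg);
    std::vector<char> buffer(size);
    if (!file.read(buffer.data(), size)) return {};
    return buffer;
}
```

The resulting buffer.data()/buffer.size() pair is what would then be handed to BuildFromBuffer.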
Progress
I also tried to follow these:
- Problems with using tensorflow lite C++ API in Android Studio Project
- Android C++ NDK : some shared libraries refuses to link in runtime
- How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?
- how to set input of Tensorflow Lite C++
- How can I build only TensorFlow lite and not all TensorFlow from source?
but in my case I used Bazel to build the TFL libs.
Trying to build the classification demo (label_image), I managed to build it and adb push it to my device, but when trying to run it I got the following error:
ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite
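That failure looks like a working-directory issue: the binary opens the model via the relative path ./mobilenet_quant_v1_224.tflite. The usual pattern is to push the model next to the binary and run from that directory (the device path and the flag name are assumptions based on the label_image README):

```shell
adb push mobilenet_quant_v1_224.tflite /data/local/tmp/
adb push label_image /data/local/tmp/
adb shell "cd /data/local/tmp && ./label_image --tflite_model ./mobilenet_quant_v1_224.tflite"
```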
- I followed zimenglyu's post: trying to set android_sdk_repository / android_ndk_repository in WORKSPACE got me an error: WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk'), and locating these statements at different places resulted in the same error.
- I deleted these changes to WORKSPACE and continued with zimenglyu's post: I compiled libtensorflowLite.so and edited CMakeLists.txt so that the libtensorflowLite.so file was referenced, but left the FlatBuffer part out. The Android project compiled successfully, but there was no evident change; I still can't include any TFLite libraries.
Trying to compile TFL, I added a cc_binary
to tensorflow/tensorflow/lite/BUILD
(following the label_image example):
cc_binary(
name = "native-lib",
srcs = [
"native-lib.cpp",
],
linkopts = tflite_experimental_runtime_linkopts() + select({
"//tensorflow:android": [
"-pie",
"-lm",
],
"//conditions:default": [],
}),
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite:framework",
"//tensorflow/lite:string_util",
"//tensorflow/lite/delegates/nnapi:nnapi_delegate",
"//tensorflow/lite/kernels:builtin_ops",
"//tensorflow/lite/profiling:profiler",
"//tensorflow/lite/tools/evaluation:utils",
] + select({
"//tensorflow:android": [
"//tensorflow/lite/delegates/gpu:delegate",
],
"//tensorflow:android_arm64": [
"//tensorflow/lite/delegates/gpu:delegate",
],
"//conditions:default": [],
}),
)
and when trying to build it for x86_64 and arm64-v8a I get an error:
cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'
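For reference, my understanding is that these "does not contain a toolchain for cpu" errors mean Bazel is still selecting the host crosstool. After TensorFlow's ./configure is answered with valid Android SDK/NDK paths, the Android toolchains are normally selected with the generated --config entries rather than bare --cpu flags (a sketch, using the same target as earlier):

```shell
# Assumes ./configure was run and the Android NDK/SDK questions were answered,
# which writes the android_* configs into .tf_configure.bazelrc
bazel build -c opt --config=android_arm64 //tensorflow/lite/java:tensorflow-lite
```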
Checking external/local_config_cc/BUILD (which produced the error) at line 47:
cc_toolchain_suite(
name = "toolchain",
toolchains = {
"k8|compiler": ":cc-compiler-k8",
"k8": ":cc-compiler-k8",
"armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
"armeabi-v7a": ":cc-compiler-armeabi-v7a",
},
)
and these are the only two cc_toolchains found. Searching the repository for "cc-compiler-" I only found "aarch64", which I assume is for the 64-bit ARM, but nothing with "x86_64". There is "x64_windows", though - and I'm on Linux.
Trying to build with aarch64 like so:
bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite
results in an error:
ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'
Using the libraries in Android Studio:
I was able to build the library for the x86_64 architecture by changing the soname in the build config and using full paths in CMakeLists.txt. This resulted in a .so shared library. Also, I was able to build the library for arm64-v8a using the TFLite Docker container, by adjusting the aarch64_makefile.inc file; I did not change any build options, and let build_aarch64_lib.sh build whatever it builds. This resulted in a .a static library.
So now I have two TFLite libs, but I'm still unable to use them (I can't #include "..." anything, for example).
When trying to build the project, using only x86_64 works fine, but trying to include the arm64-v8a library results in a ninja error:
'.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it
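One detail in that ninja error: it resolves paths under obj/armeabi-v7a/ even though the static library was built for arm64-v8a, i.e. an armeabi-v7a variant is still being configured. Keying the imported-library path off CMake's ANDROID_ABI variable makes such a mismatch explicit at configure time (a sketch; the directory layout is an assumption):

```cmake
# One prebuilt archive per ABI, e.g. libs/arm64-v8a/libtensorflow-lite.a;
# CMake then fails fast for any ABI that has no matching prebuilt.
add_library(tensorflow-lite STATIC IMPORTED)
set_target_properties(tensorflow-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/libtensorflow-lite.a)
```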
Different approach - build/compile source files with Gradle:
- I created a Native C++ project in Android Studio.
- I took the basic C/C++ source files and headers from TensorFlow's lite directory and created a similar structure in app/src/main/cpp, in which I include the (A) tensorflow, (B) absl and (C) flatbuffers files.
- I changed the #include "tensorflow/... lines in all of TensorFlow's header files to relative paths so the compiler can find them.
- In the app's build.gradle I added a no-compression line for the .tflite file: aaptOptions { noCompress "tflite" }
- I added an assets directory to the app.
- In native-lib.cpp I added some example code from the TFLite website.
- Tried to build the project with the source files included (build target is arm64-v8a).
I get an error:
/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'
In <memory>, line 2339 is the "delete __ptr;" line:
_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
    static_assert(sizeof(_Tp) > 0,
                  "default_delete can not delete incomplete type");
    static_assert(!is_void<_Tp>::value,
                  "default_delete can not delete incomplete type");
    delete __ptr;
}
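For what it's worth, that line in <memory> is just where std::default_delete performs delete __ptr;, so it is the point at which the destructor of the unique_ptr's element type must be resolvable by the linker; the body of tflite::impl::Interpreter::~Interpreter() lives in TFLite's compiled object files, not its headers. A self-contained illustration of the same mechanism with a stand-in class (Engine is hypothetical):

```cpp
#include <memory>

// Stand-in for a class whose destructor is declared in a header but defined
// out-of-line, the way tflite::Interpreter's is.
struct Engine {
    ~Engine();             // declaration only, as the header would have it
    static int destroyed;  // counts destructor runs, for demonstration
};
int Engine::destroyed = 0;

// In the real case this definition lives inside the TFLite library; if that
// library is not linked into native-lib, the unique_ptr deleter's
// `delete __ptr;` becomes an undefined reference exactly like the one above.
Engine::~Engine() { ++Engine::destroyed; }
```

Destroying a std::unique_ptr&lt;Engine&gt; forces the linker to resolve Engine::~Engine(); in the TFLite case, linking libtensorflow-lite.a (or the .so) into native-lib is what would provide the ~Interpreter() body.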
Question
How can I include the TFLite libraries in Android Studio, so that I can run TFL inference from the NDK?
Alternatively - how can I use Gradle (currently with CMake) to build and compile the source files?