- Sample projects for using TensorFlow Lite in C++ on multiple platforms
- A typical project structure is shown below
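
A rough sketch of the layout, reconstructed from the paths used throughout this README:

```txt
play_with_tflite/
├── pj_tflite_cls_mobilenet_v2/   # one sample project (other pj_tflite_* projects follow the same layout)
│   └── ImageProcessor/
├── InferenceHelper/
│   └── third_party/              # prebuilt libraries (third_party.zip)
├── resource/                     # models (resource.zip)
└── ViewAndroid/                  # Android Studio project
```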

## Platform
- Linux (x64)
    - Tested on Xubuntu 18 in VirtualBox on Windows 10
- Linux (armv7)
    - Tested on Raspberry Pi 4 (Raspbian 32-bit)
- Linux (aarch64)
    - Tested on Jetson Nano (JetPack 4.3) and Jetson NX (JetPack 4.4)
- Android (aarch64)
    - Tested on Pixel 4a
- Windows (x64), Visual Studio 2017 / 2019
    - Tested on Windows 10 64-bit

## Delegate
- Edge TPU
    - Tested on Windows, Raspberry Pi (armv7) and Jetson NX (aarch64)
- XNNPACK
    - Tested on Windows, Raspberry Pi (armv7) and Jetson NX (aarch64)
- GPU
    - Tested on Jetson NX and Android
- NNAPI (CPU, GPU, DSP)
    - Tested on Android (Pixel 4a)

## Usage
```sh
./main [input]
```
- input = blank
    - use the default image file set in the source code (main.cpp)
    - e.g. `./main`
- input = *.mp4, *.avi, *.webm
    - use a video file
    - e.g. `./main test.mp4`
- input = *.jpg, *.png, *.bmp
    - use an image file
    - e.g. `./main test.jpg`
- input = number (e.g. 0, 1, 2, ...)
    - use a camera
    - e.g. `./main 0`
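
For reference, the input handling above boils down to logic along these lines. This is a hypothetical sketch; each sample's main.cpp implements its own variant:

```cpp
// Hypothetical sketch of the [input] dispatch; illustration only,
// not the actual main.cpp of any sample.
#include <opencv2/opencv.hpp>
#include <string>

int main(int argc, char* argv[]) {
    const std::string input = (argc > 1) ? argv[1] : "";
    auto has_ext = [&](const std::string& ext) {
        return input.size() >= ext.size() &&
               input.compare(input.size() - ext.size(), ext.size(), ext) == 0;
    };
    cv::VideoCapture cap;
    cv::Mat frame;
    if (input.empty()) {
        // "default.jpg" is a placeholder for the default image set in the source code
        frame = cv::imread("default.jpg");
    } else if (input.find_first_not_of("0123456789") == std::string::npos) {
        cap.open(std::stoi(input));   // a bare number selects a camera
    } else if (has_ext(".jpg") || has_ext(".png") || has_ext(".bmp")) {
        frame = cv::imread(input);    // still image file
    } else {
        cap.open(input);              // video file (*.mp4, *.avi, *.webm)
    }
    /* ... run inference on the frame / captured frames ... */
    return 0;
}
```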

## How to build
### Requirement
- OpenCV 4.x

### Get source code
```sh
git clone https://github.com/iwatake2222/play_with_tflite.git
cd play_with_tflite
git submodule update --init --recursive --recommend-shallow --depth 1
cd InferenceHelper/third_party/tensorflow
chmod +x tensorflow/lite/tools/make/download_dependencies.sh
tensorflow/lite/tools/make/download_dependencies.sh
```

### Download prebuilt libraries
- Download the prebuilt libraries (third_party.zip) from https://github.com/iwatake2222/InferenceHelper/releases/ (they are not included in this repository)
- Extract it to `InferenceHelper/third_party/`

### Download models
- Download the models (resource.zip) from https://github.com/iwatake2222/play_with_tflite/releases/
- Extract it to `resource/`

### Build and run (Windows, Visual Studio)
- Configure and generate a new project using cmake-gui for Visual Studio 2017 64-bit
    - Where is the source code: path-to-play_with_tflite/pj_tflite_cls_mobilenet_v2 (for example)
    - Where to build the binaries: path-to-build (any)
- Open `main.sln`
- Set the `main` project as the startup project, then build and run!

#### Note for debug
Running with `Debug` causes an exception, so use `Release` or `RelWithDebInfo` in Visual Studio.

#### Note for EdgeTPU in Windows
- Install `edgetpu_runtime_20210119.zip`
    - Execution failed with `edgetpu_runtime_20210726.zip` for some reason in my environment
    - If you have already installed `edgetpu_runtime_20210726.zip`, uninstall it. Also uninstall "UsbDk Runtime Libraries" from Windows. Then install `edgetpu_runtime_20210119.zip`
- Delete `C:\Windows\System32\edgetpu.dll` so that your project uses the created edgetpu.dll, or copy the created edgetpu.dll to `C:\Windows\System32\edgetpu.dll`

### Build and run (Linux)
```sh
cd pj_tflite_cls_mobilenet_v2   # for example
mkdir build && cd build
cmake ..
make
./main
```

#### Note for EdgeTPU
```sh
cp libedgetpu.so.1.0 libedgetpu.so.1
sudo LD_LIBRARY_PATH=./ ./main
```

### Options (Delegates)
```sh
# Edge TPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
cp libedgetpu.so.1.0 libedgetpu.so.1
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
sudo LD_LIBRARY_PATH=./ ./main
# you may get "Segmentation fault (core dumped)" without sudo

# GPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
# you may need `sudo apt install ocl-icd-opencl-dev` or `sudo apt install libgles2-mesa-dev`

# XNNPACK
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on

# NNAPI (Note: NNAPI is for Android, so modify CMakeLists.txt in the Android Studio project rather than running this command)
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on
```
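
These switches select which TensorFlow Lite delegate InferenceHelper attaches internally. For orientation, a minimal sketch of what enabling XNNPACK amounts to in the plain TensorFlow Lite C++ API (not the repository's actual code):

```cpp
// Sketch only: attaching the XNNPACK delegate with the plain TFLite C++ API.
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"

bool AttachXnnpack(tflite::Interpreter* interpreter) {
    TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
    options.num_threads = 4;  // hypothetical value; tune for the target device
    TfLiteDelegate* delegate = TfLiteXNNPackDelegateCreate(&options);
    // The delegate must outlive the interpreter; cleanup via
    // TfLiteXNNPackDelegateDelete() is omitted in this sketch.
    return interpreter->ModifyGraphWithDelegate(delegate) == kTfLiteOk;
}
```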
You also need to select the framework when calling `InferenceHelper::create`.
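
A minimal sketch, assuming the `InferenceHelper` enum values mentioned later in this README (`TENSORFLOW_LITE`, `TENSORFLOW_LITE_GPU`); check the InferenceHelper header in your checkout for the exact factory signature:

```cpp
// Sketch only: the framework/delegate is chosen by the value passed to create().
#include "inference_helper.h"  // from the InferenceHelper submodule

void SetupInference() {
    auto* helper = InferenceHelper::create(InferenceHelper::TENSORFLOW_LITE_GPU);
    if (helper == nullptr) {
        // Hypothetical fallback to the plain CPU path
        helper = InferenceHelper::create(InferenceHelper::TENSORFLOW_LITE);
    }
}
```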

## How to create the Android project
### Requirements
- Android Studio
    - Compile SDK Version: 30
    - Build Tools Version: 30.0.0
    - Target SDK Version: 30
    - Min SDK Version: 24
        - With 23, I got the following error: `bionic/libc/include/bits/fortify/unistd.h:174: undefined reference to '__write_chk'` (android/ndk#1179)
- Android NDK: 21.3.6528147
- OpenCV: opencv-4.4.0-android-sdk.zip
- *These are just the versions I used

### Configure NDK
- File -> Project Structure -> SDK Location -> Android NDK location
    - e.g. C:\Users\abc\AppData\Local\Android\Sdk\ndk\21.3.6528147

### Import OpenCV
- Download and extract the OpenCV Android SDK (https://github.com/opencv/opencv/releases)
- File -> New -> Import Module
    - path-to-opencv\opencv-4.3.0-android-sdk\OpenCV-android-sdk\sdk
- File -> Project Structure -> Dependencies -> app -> Declared Dependencies -> + -> Module Dependencies
    - select sdk
    - In case you cannot import the OpenCV module, remove the sdk module and app's dependency on sdk in Project Structure
- Run `git update-index --skip-worktree ViewAndroid/app/build.gradle ViewAndroid/settings.gradle ViewAndroid/.idea/gradle.xml` so that the modified settings (including the OpenCV sdk) are not committed
- Modify `ViewAndroid\app\src\main\cpp\CMakeLists.txt` to call the image processor function you want to use:
    ```cmake
    set(ImageProcessor_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../../pj_tflite_arprobe/ImageProcessor")
    ```
- Copy the `resource` directory to `/storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource` (for example). The directory is created after running the app, so the first run will fail because the model files cannot be read.
- Note: By default, `InferenceHelper::TENSORFLOW_LITE` is used. You can modify `ViewAndroid\app\src\main\cpp\CMakeLists.txt` to select which delegate to use. It's better to use `InferenceHelper::TENSORFLOW_LITE_GPU` to get high performance.
- By default, NNAPI selects the most appropriate accelerator for the model. You can specify the accelerator yourself by modifying the following code in `InferenceHelperTensorflowLite.cpp`:
    ```cpp
    // options.accelerator_name = "qti-default";
    // options.accelerator_name = "qti-dsp";
    // options.accelerator_name = "qti-gpu";
    ```
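
For context, a sketch of where such an option plugs into the TensorFlow Lite NNAPI delegate API; the repository does this inside InferenceHelperTensorflowLite.cpp, and the surrounding code here is illustrative only:

```cpp
// Sketch only: forcing a specific NNAPI accelerator by name.
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"

bool AttachNnapi(tflite::Interpreter* interpreter) {
    tflite::StatefulNnApiDelegate::Options options;
    options.accelerator_name = "qti-dsp";  // e.g. the DSP path on Qualcomm SoCs
    // The delegate must outlive the interpreter; a static keeps this sketch simple.
    static tflite::StatefulNnApiDelegate delegate(options);
    return interpreter->ModifyGraphWithDelegate(&delegate) == kTfLiteOk;
}
```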

## Prebuilt libraries
Prebuilt libraries are stored in `InferenceHelper/ThirdParty/tensorflow_prebuilt` and `InferenceHelper/ThirdParty/edgetpu_prebuilt`. If you want to build them yourself, please follow the instructions for:
- v2.4.0
- v2.6.0

## License
- play_with_tflite
    - https://github.com/iwatake2222/play_with_tflite
    - Copyright 2020 iwatake2222
    - Licensed under the Apache License, Version 2.0

## Acknowledgements
- This project utilizes OSS (Open Source Software)
- This project utilizes models from other projects:
    - Please find `model_information.md` in resource.zip