
Working on the RPI0

  1. Download both files below to your RPI0.
$ wget https://github.com/ricardodeazambuja/libedgetpu/releases/download/rpi0_tflite_edgetpu/libedgetpu.so.1.0
$ wget https://github.com/ricardodeazambuja/libedgetpu/releases/download/rpi0_tflite_edgetpu/tflite_runtime-2.5.0-cp37-cp37m-linux_armv6l.whl
  2. Install them (an optional sanity check in Python follows these commands):
$ sudo mv libedgetpu.so.1.0 /usr/lib/arm-linux-gnueabihf/.
$ sudo ln -s /usr/lib/arm-linux-gnueabihf/libedgetpu.so.1.0 /usr/lib/arm-linux-gnueabihf/libedgetpu.so.1
$ sudo pip3 install tflite_runtime-2.5.0-cp37-cp37m-linux_armv6l.whl --upgrade
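As an optional sanity check (my addition, not part of the original steps), you can confirm the symlinked library is loadable before moving on; `tflite.load_delegate()` below relies on the same dynamic loading mechanism:

import ctypes

# Raises OSError if libedgetpu.so.1 is missing or the symlink from step 2 is wrong
ctypes.CDLL('libedgetpu.so.1')
print('libedgetpu.so.1 loaded OK')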
  3. Test by running this piece of code:
import requests

try:
    import tflite_runtime.interpreter as tflite
except ModuleNotFoundError:
    print("Did you install the TFLite Runtime? "
          "https://github.com/ricardodeazambuja/libedgetpu-rpi0/releases/tag/rpi0_tflite_edgetpu")
    raise


EDGETPU_SHARED_LIB = 'libedgetpu.so.1'

#
# Download a model used by https://github.com/google-coral/example-object-tracker
#
url = 'https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'
r = requests.get(url, allow_redirects=True)
with open('mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite', 'wb') as f:
    f.write(r.content)

edgetpu_model_file = 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite'


#
# EdgeTPU Accelerator
#
device = []  # I have only one USB accelerator, so no need to pick a specific device

delegate = tflite.load_delegate(EDGETPU_SHARED_LIB,
                                {'device': device[0]} if device else {})
tflite_interpreter = tflite.Interpreter(model_path=edgetpu_model_file,
                                        experimental_delegates=[delegate])
tflite_interpreter.allocate_tensors()

input_details = tflite_interpreter.get_input_details()
output_details = tflite_interpreter.get_output_details()

print("INPUT:\n", input_details)
print("OUTPUT:\n", output_details)

It should print:

INPUT:
 [{'name': 'normalized_input_image_tensor', 'index': 103, 'shape': array([  1, 300, 300,   3]), 'shape_signature': array([  1, 300, 300,   3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 128), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([128]), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
OUTPUT:
 [{'name': 'TFLite_Detection_PostProcess', 'index': 95, 'shape': array([ 1, 20,  4]), 'shape_signature': array([ 1, 20,  4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'TFLite_Detection_PostProcess:1', 'index': 96, 'shape': array([ 1, 20]), 'shape_signature': array([ 1, 20]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'TFLite_Detection_PostProcess:2', 'index': 97, 'shape': array([ 1, 20]), 'shape_signature': array([ 1, 20]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'TFLite_Detection_PostProcess:3', 'index': 98, 'shape': array([1]), 'shape_signature': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
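
To go one step further than just allocating tensors, the sketch below (my addition, assuming the tflite_interpreter, input_details and output_details objects from the script above, and the usual SSD post-processing output order of boxes, classes, scores, count) runs a single inference on a random image so you can confirm the delegate actually executes on the Edge TPU:

import numpy as np

# The SSD model above expects a uint8 image of shape [1, 300, 300, 3]
dummy_input = np.random.randint(0, 256, size=tuple(input_details[0]['shape']), dtype=np.uint8)
tflite_interpreter.set_tensor(input_details[0]['index'], dummy_input)
tflite_interpreter.invoke()

# Post-processed detections: bounding boxes, class ids, scores and number of detections
boxes   = tflite_interpreter.get_tensor(output_details[0]['index'])  # [1, 20, 4]
classes = tflite_interpreter.get_tensor(output_details[1]['index'])  # [1, 20]
scores  = tflite_interpreter.get_tensor(output_details[2]['index'])  # [1, 20]
count   = tflite_interpreter.get_tensor(output_details[3]['index'])  # [1]

print("Best score on random noise (expected to be low):", scores[0].max())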

If you want to check how I managed to compile everything, have a look at the rpi0 branches of my forks:

and my latest blog post:

Update (10/01/2020):
I was having problems while using a (multi)posenet model on my RPI0. If my script was left running by itself, it would soon crash with the error `Deadline exceeded: USB transfer error 2 [LibUsbDataOutCallback]`. At first I thought it was a classic power issue, but after connecting my USB power meter and learning that the RPI0 connects the power supply (5V) directly to the microUSB, I gave up going down that rabbit hole. After checking dmesg (`dwc2_hc_halt() Channel can't be halted`) and asking Google for help, I solved the problem by removing `dtoverlay=dwc2` from `/boot/config.txt`. For an explanation of what dwc2 does, have a look here.
Update (13/04/2020):
I tried to build a package with the latest TensorFlow (master / 2.6), but I didn't have enough time to sort out the problems introduced by upstream changes. So I will stick to this version (2.5.0) for now...