Library for modelling performance costs of different Neural Network workloads on NPU devices
Updated Jun 13, 2024 - C++
A ROS package for offloading inference to the Intel Movidius VPU Neural Compute Stick, along with an evaluation of the device for real-time inference.