This project is built upon the state-of-the-art YOLOv4-Tiny detector and the classic Kalman Filter. With both the camera and the object in motion, the system is able to track the target robustly in the 3D world. Once the UAV detects movement from the target, it maneuvers to follow the dynamic object. The link to the journal paper can be found here.
The rapid growth of autonomous unmanned aerial vehicles (UAVs) has made them a promising platform for real-world applications. In particular, a UAV equipped with a vision system can be leveraged for surveillance. This paper proposes a learning-based UAV system for autonomous surveillance, in which the UAV detects, tracks, and follows a target object without human intervention. Specifically, we adopt the YOLOv4-Tiny algorithm for semantic object detection and consolidate it with a 3D object pose estimation method and a Kalman Filter to enhance perception performance. In addition, a back-end UAV path planner for the surveillance maneuver is integrated to complete the fully autonomous system. The perception module is assessed on a quadrotor UAV, while the whole system is validated through flight experiments. The experimental results verify the robustness, effectiveness, and reliability of the autonomous object tracking UAV system in performing surveillance tasks. The source code is released to the research community for future reference.
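As summarized above, the perception module chains YOLOv4-Tiny detections, a depth-based 3D pose estimate of the target, and a Kalman Filter that smooths and predicts the target trajectory. The snippet below is only a minimal sketch of that last step using OpenCV's `cv::KalmanFilter` with a constant-velocity model over the 3D target position; the state layout, noise magnitudes, and helper names are illustrative assumptions, not the exact code shipped in this repository.

```cpp
// Minimal sketch: constant-velocity Kalman Filter over the 3D target position.
// State x = [px, py, pz, vx, vy, vz]; measurement z = [px, py, pz].
// Noise magnitudes are placeholders and not tuned for this system.
#include <opencv2/video/tracking.hpp>

cv::KalmanFilter makeTargetFilter(double dt)
{
    cv::KalmanFilter kf(6, 3, 0, CV_64F);

    // Transition model: position integrates velocity over one time step dt.
    kf.transitionMatrix = cv::Mat::eye(6, 6, CV_64F);
    for (int i = 0; i < 3; ++i)
        kf.transitionMatrix.at<double>(i, i + 3) = dt;

    // Only the 3D position (from detection + depth) is measured.
    kf.measurementMatrix = cv::Mat::zeros(3, 6, CV_64F);
    for (int i = 0; i < 3; ++i)
        kf.measurementMatrix.at<double>(i, i) = 1.0;

    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    return kf;
}

// Per frame: predict first, then correct only when a detection is available,
// so the estimate keeps coasting through short occlusions or missed detections.
cv::Mat stepFilter(cv::KalmanFilter &kf, bool detected, const cv::Mat &z)
{
    cv::Mat predicted = kf.predict();
    return detected ? kf.correct(z) : predicted;
}
```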
- The system has been validated on Ubuntu 18.04
- Installation of ROS Melodic: ROS Install
- Requires OpenCV >= 4.4: OpenCV Linux Install
- Python 3.8
- cuDNN >= 7.0: cuDNN Archive
- RealSense library: librealsense github
- RealSense ROS wrapper (we suggest building from source): ros wrapper
- YOLO Darknet: Darknet
- Suggested dataset scale: 2000 training images per class, roughly 500 validation images per class, and 2000 background images without the object in the FoV
- Labelling tool: labelImg (labelimg repo)
2023 update: To save you some time, if you are only looking for YOLO in ROS, please go directly to this repo.
- Clone our repository into your workspace:
cd ~/xx_ws/src
git clone https://github.com/PAIR-Lab/AUTO.git
- Modify
Go to here if CUDA is available and uncomment the following lines:
//uncomment the below if CUDA available
//this->mydnn.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
//this->mydnn.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
and go here to set the paths of your custom YOLO files (see the sketch below):
//change the custom YOLO weight file location, as well as the cfg file and names file
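For reference, a hedged sketch of what those two spots typically look like once edited is given below. `readNetFromDarknet` is the standard OpenCV DNN loader; the function name, local variable names, and file paths are placeholders you must adapt to this repository's source and your own files.

```cpp
// Illustrative sketch only: load a custom YOLOv4-Tiny model with OpenCV DNN
// and enable the CUDA backend. Paths are placeholders for your own files.
#include <opencv2/dnn.hpp>
#include <string>

cv::dnn::Net loadCustomYolo()
{
    // Point these to your custom cfg / weights (keep the matching .names
    // file for class labels wherever the node expects it).
    const std::string cfg     = "/home/user/yolo/yolov4-tiny-custom.cfg";
    const std::string weights = "/home/user/yolo/yolov4-tiny-custom_best.weights";

    cv::dnn::Net net = cv::dnn::readNetFromDarknet(cfg, weights);

    // These two lines require an OpenCV build with CUDA support.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    return net;
}
```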
- Compile
cd ~/xx_ws
catkin_make
- Run
rosrun offb camera && rosrun offb track
#or write a launch file instead (see the sketch below)
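For convenience, both nodes can also be started from a single launch file. The following is only a minimal sketch, assuming the package is `offb` and the executables are `camera` and `track` as in the rosrun commands above; the file name `auto_track.launch` is just an example and would live in the package's launch/ directory.

```xml
<!-- auto_track.launch: illustrative only; package/executable names follow the rosrun commands above -->
<launch>
  <node pkg="offb" type="camera" name="camera_node" output="screen"/>
  <node pkg="offb" type="track"  name="track_node"  output="screen"/>
</launch>
```

It could then be started with `roslaunch offb auto_track.launch`.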
@article{lo2021dynamic,
title={Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications},
author={Lo, Li-Yu and Yiu, Chi Hao and Tang, Yu and Yang, An-Shik and Li, Boyang and Wen, Chih-Yung},
journal={Sensors},
volume={21},
number={23},
pages={7888},
year={2021},
publisher={MDPI}
}
Patrick Li-yu LO: liyu.lo@connect.polyu.hk
Summer Chi Hao Yiu: chi-hao.yiu@connect.polyu.hk
Bryant Yu Tang: bryant.tang@connect.polyu.hk