
I use YOLO-Tiny and a Kalman filter (KF) for a UAV to track and follow an object.


AUTO (an "A"utonomous "U"av that "T"racks "O"bject)

Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications

This project is built upon the state-of-the-art YOLO series (the Tiny variant) and the well-known Kalman filter. With both the camera and the object moving, our system is able to track the target robustly in the 3D world. Once it detects movement of the target, the UAV maneuvers to follow the dynamic object. The journal paper can be found here.

Abstract

The ever-burgeoning growth of autonomous unmanned aerial vehicles (UAVs) has demonstrated a promising platform for utilization in real-world applications. In particular, a UAV equipped with a vision system can be leveraged for surveillance applications. This paper proposes a learning-based UAV system for achieving autonomous surveillance, in which the UAV can autonomously detect, track, and follow a target object without human intervention. Specifically, we adopted the YOLOv4-Tiny algorithm for semantic object detection and then consolidated it with a 3D object pose estimation method and a Kalman filter to enhance the perception performance. In addition, a back-end UAV path-planning module for surveillance maneuvers is integrated to complete the fully autonomous system. The perception module is assessed on a quadrotor UAV, while the whole system is validated through flight experiments. The experimental results verify the robustness, effectiveness, and reliability of the autonomous object tracking UAV system in performing surveillance tasks. The source code is released to the research community for future reference.
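As a rough illustration of the perception back end described above, the snippet below runs a constant-velocity Kalman filter over the detector's estimated 3D object position using OpenCV's cv::KalmanFilter. This is a minimal sketch, not the repository's implementation; the state layout, noise covariances, and helper names (makeKf, trackStep) are illustrative assumptions.

// Minimal sketch: constant-velocity Kalman filter over 3D object position.
// State: [x, y, z, vx, vy, vz]; measurement: [x, y, z] from the detector.
#include <opencv2/video/tracking.hpp>

cv::KalmanFilter makeKf(float dt)
{
    cv::KalmanFilter kf(6, 3, 0, CV_32F);

    // x_{k+1} = x_k + v_k * dt  (constant-velocity transition)
    kf.transitionMatrix = (cv::Mat_<float>(6, 6) <<
        1, 0, 0, dt, 0,  0,
        0, 1, 0, 0,  dt, 0,
        0, 0, 1, 0,  0,  dt,
        0, 0, 0, 1,  0,  0,
        0, 0, 0, 0,  1,  0,
        0, 0, 0, 0,  0,  1);

    // Only the position is observed.
    kf.measurementMatrix = cv::Mat::zeros(3, 6, CV_32F);
    kf.measurementMatrix.at<float>(0, 0) = 1.f;
    kf.measurementMatrix.at<float>(1, 1) = 1.f;
    kf.measurementMatrix.at<float>(2, 2) = 1.f;

    // Noise covariances are placeholder values, to be tuned per platform.
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    return kf;
}

// Per frame: predict, then correct with the detector's 3D position (if any),
// so the track is maintained even when the detector misses a frame.
cv::Point3f trackStep(cv::KalmanFilter &kf, bool detected, const cv::Point3f &meas)
{
    cv::Mat est = kf.predict();
    if (detected)
    {
        cv::Mat z = (cv::Mat_<float>(3, 1) << meas.x, meas.y, meas.z);
        est = kf.correct(z);
    }
    return {est.at<float>(0), est.at<float>(1), est.at<float>(2)};
}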

Video

[Demo video]

Requirements

Dataset Establishment and Training

  • YOLO Darknet: Darknet
  • Suggested dataset scale: 2000 training images per class, a corresponding 500 validation images, and 2000 background images with no object in the FoV
  • Labelling tool: labelImg (see the labelimg repo)

Clone our work! (on Ubuntu)

  1. 2023 update

     To save you some time: if you are only looking for YOLO in ROS, please go directly to this repo.

  2. Clone our repository into your workspace

cd ~/xx_ws/src
git clone https://github.com/PAIR-Lab/AUTO.git
  3. Modify

     Go here if CUDA is available (see also the standalone sketch after this list):

//uncomment the lines below if CUDA is available
    //this->mydnn.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    //this->mydnn.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

     and here:

//change the YOLO custom weight file location, as well as the cfg file and the name file
  4. Compile
cd ~/xx_ws
catkin_make
  5. Run
rosrun offb camera && rosrun offb track
# or just write a launch file instead
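For reference, here is a minimal, self-contained sketch of what step 3 amounts to, assuming OpenCV's DNN module; the file paths are placeholders and the variable names are not taken from the repository:

#include <fstream>
#include <string>
#include <vector>
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder paths: point these at your own custom-trained files.
    const std::string cfg     = "/path/to/yolov4-tiny-custom.cfg";
    const std::string weights = "/path/to/yolov4-tiny-custom.weights";
    const std::string names   = "/path/to/obj.names";

    // Load the Darknet network.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet(cfg, weights);

    // Uncomment only if your OpenCV build has CUDA support.
    // net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    // net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

    // Class names, one label per line.
    std::vector<std::string> classNames;
    std::ifstream ifs(names);
    for (std::string line; std::getline(ifs, line); )
        classNames.push_back(line);

    return 0;
}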

Cite Us

@article{lo2021dynamic,
  title={Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications},
  author={Lo, Li-Yu and Yiu, Chi Hao and Tang, Yu and Yang, An-Shik and Li, Boyang and Wen, Chih-Yung},
  journal={Sensors},
  volume={21},
  number={23},
  pages={7888},
  year={2021},
  publisher={MDPI}
}

Maintainers

Patrick Li-yu LO: liyu.lo@connect.polyu.hk
Summer Chi Hao Yiu: chi-hao.yiu@connect.polyu.hk 
Bryant Yu Tang: bryant.tang@connect.polyu.hk