Gimbal Perception

This package identifies the skyline and the ground plane in outdoor images in real time. It takes a stabilized image as a reference, detects the movement of subsequent frames, and sends a correction signal to the gimbal servos.
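As a rough illustration of the skyline-detection idea (a sketch, not the package's actual code), the horizon can be estimated from the binary sky mask produced by the segmentation network by finding the row where each image column transitions from sky to ground:

```python
import numpy as np

def estimate_horizon_row(sky_mask: np.ndarray) -> int:
    """Estimate the horizon as the mean row where each column
    transitions from sky (1) to ground (0).

    sky_mask: 2D array of 0/1, where 1 = pixel classified as sky.
    """
    rows = []
    for col in sky_mask.T:                 # iterate over image columns
        ground = np.flatnonzero(col == 0)  # row indices of ground pixels
        # the first ground pixel from the top marks the sky/ground boundary
        rows.append(ground[0] if ground.size else sky_mask.shape[0])
    return int(np.mean(rows))

# Synthetic example: sky occupies the top 40 rows of a 100x100 frame.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[:40, :] = 1
print(estimate_horizon_row(mask))  # -> 40
```

A real node would also have to smooth this estimate over time, since per-frame segmentation noise makes single-column transitions jittery.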

1. Prerequisites

Download and install the HELLO AI WORLD NVIDIA JETSON repository, used to run the segmentation neural network, and the ROS DEEP LEARNING repository, which provides the ROS1 interface. Focus mainly on Semantic Segmentation with SegNet and on running the live-camera semantic segmentation demo. A Raspberry Pi camera (MIPI CSI camera) was used to record our demo video. The pretrained network model is in the model folder; it was trained on the Skyfinder Dataset.
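For reference, the steps below mirror the standard Hello AI World build instructions and clone the ROS1 interface into the catkin workspace; verify paths and options against the linked upstream repositories:

```shell
# Build jetson-inference (runtime for the segmentation network)
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build && cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig

# Clone the ROS1 interface into the catkin workspace
cd ~/catkin_ws/src
git clone https://github.com/dusty-nv/ros_deep_learning
```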

2. Build gimbal_perception package

Clone the repository into your catkin workspace (in this example catkin_ws):

  cd ~/catkin_ws/src
  git clone https://github.com/hl49/uav_gimbal.git
  cd ../
  catkin_make
  source ~/catkin_ws/devel/setup.bash

2.1 Include modified pretrained network:

2.1.1 Copy the content from the model folder into the jetson-inference/python/examples directory.

2.1.2 Overwrite the segnet.ros1.launch file located in the ros_deep_learning package with the content of the new segnet.ros1.launch file included in the launch folder.
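Assuming jetson-inference was cloned into the home directory and the workspace is catkin_ws (adjust the paths to your setup), steps 2.1.1 and 2.1.2 amount to:

```shell
# 2.1.1: copy the pretrained model files next to the segmentation examples
cp -r ~/catkin_ws/src/uav_gimbal/model/* ~/jetson-inference/python/examples/

# 2.1.2: replace the stock launch file with the one shipped in this package
cp ~/catkin_ws/src/uav_gimbal/launch/segnet.ros1.launch \
   ~/catkin_ws/src/ros_deep_learning/launch/segnet.ros1.launch
```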

3. Running the Gimbal Perception Node

3.1 Run the live camera segmentation network:

roslaunch ros_deep_learning segnet.ros1.launch

3.2 Run the perception node:

rosrun perception perception_node.py
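The node's control idea, turning the horizon's vertical drift relative to the stabilized reference frame into a servo signal, can be sketched as follows (the gain and function names are illustrative assumptions, not the package's actual API):

```python
def servo_correction(reference_row: int, current_row: int,
                     frame_height: int, gain: float = 30.0) -> float:
    """Map the horizon's vertical displacement (pixels) to a pitch
    correction in degrees for the gimbal servo.

    gain: illustrative degrees of correction per full-frame displacement.
    """
    # Normalized displacement in [-1, 1]; positive = horizon moved down,
    # i.e. the camera pitched up
    displacement = (current_row - reference_row) / frame_height
    # Proportional correction (negative: pitch back down);
    # a real node would clamp and rate-limit this value
    return -gain * displacement

# Reference horizon at row 40; horizon drifts to row 60 in a 100-row frame
print(servo_correction(40, 60, 100))  # ~ -6.0 degrees
```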