This folder contains code and instructions to run DroNet on a Bebop drone.
The bridge between the DroNet Keras code and the Bebop control is implemented in ROS.
You need to install ROS to have the basic tools available. The project was tested under ROS Indigo, but other distributions should work without problems.
The folder containing all the code related to a project is usually called a `workspace`. Create your own workspace following these instructions and call it `bebop_ws`.
In this step, we will bridge our ROS workspace with the Bebop drone. To do so, just follow these instructions.
Be sure to carefully read all the instructions available on the website about how to run the driver, how to send commands, and how to read data from the drone!
You will need a modified parameter file to launch the bebop_autonomy package for DroNet. You can find it here: `outdoor.yaml`.
After we are done with the bridge between ROS and the Bebop, we need to connect the DroNet Keras code to ROS. This is again very easy to do, since ROS has both a C++ and a Python interface.
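For intuition, here is a minimal sketch of what such a bridge node can look like: it subscribes to the Bebop camera stream, feeds each frame through the Keras model, and publishes the raw predictions. The topic names, the preprocessing, and the model file names (`model_struct.json`, `model_weights.h5`) are illustrative assumptions; the actual implementation is the one shipped in `dronet_perception`.

```python
#!/usr/bin/env python
# Hypothetical minimal bridge: feeds Bebop camera frames through a Keras
# model and publishes the raw predictions. Names here are assumptions.
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Float32MultiArray
from keras.models import model_from_json

class DronetBridge(object):
    def __init__(self):
        # Load the trained DroNet model (assumed file names).
        with open('model_struct.json') as f:
            self.model = model_from_json(f.read())
        self.model.load_weights('model_weights.h5')
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('/dronet/predictions',
                                   Float32MultiArray, queue_size=1)
        rospy.Subscriber('/bebop/image_raw', Image, self.callback,
                         queue_size=1, buff_size=2**24)

    def callback(self, msg):
        # Grayscale, resized, normalized input (assumed preprocessing).
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        frame = cv2.resize(frame, (200, 200)).astype(np.float32) / 255.0
        frame = frame[np.newaxis, :, :, np.newaxis]  # shape (1, 200, 200, 1)
        # DroNet has two outputs: steering angle and collision probability
        # (assumed output order).
        steer, coll = self.model.predict(frame)
        out = Float32MultiArray()
        out.data = [float(steer[0][0]), float(coll[0][0])]
        self.pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('dronet_bridge')
    DronetBridge()
    rospy.spin()
```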
You can find the code for this step in the folder `dronet_perception`. Add this folder to your workspace:
```bash
mkdir ~/bebop_ws/dronet
cp -r YOUR_PATH/dronet_perception ~/bebop_ws/dronet
```
Now build the package:
```bash
cd ~/bebop_ws/dronet/dronet_perception
catkin build --this
```
It is now time to add an interface that converts the output of DroNet into control commands for the Bebop drone. This is implemented in the folder `dronet_control`. Again, copy this folder into your workspace and build it:
```bash
cp -r YOUR_PATH/dronet_control ~/bebop_ws/dronet
cd ~/bebop_ws/dronet/dronet_control
catkin build --this
```
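Conceptually, the node in `dronet_control` maps DroNet's two outputs, a steering angle and a collision probability, to a `geometry_msgs/Twist` for the Bebop. Below is a minimal sketch of that idea; the topic names, gains, sign conventions, and message layout are assumptions for illustration (the real pipeline is configured in `deep_navigation.launch`, including the `critical_prob` parameter discussed at the end of this document).

```python
#!/usr/bin/env python
# Hypothetical sketch of the DroNet-to-Bebop command mapping. Gains,
# topic names, and the input message layout are illustrative assumptions.
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

MAX_FORWARD = 0.3    # forward speed when the path is clear (assumed gain)
CRITICAL_PROB = 0.7  # stop above this collision probability (cf. critical_prob)

def callback(msg):
    steer, coll = msg.data  # [steering angle, collision probability]
    cmd = Twist()
    # Slow down as the collision probability rises; stop past the threshold.
    cmd.linear.x = 0.0 if coll > CRITICAL_PROB else MAX_FORWARD * (1.0 - coll)
    # Map the predicted steering to a yaw rate (sign convention assumed).
    cmd.angular.z = -steer
    pub.publish(cmd)

rospy.init_node('deep_navigation_sketch')
pub = rospy.Publisher('/bebop/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/dronet/predictions', Float32MultiArray, callback, queue_size=1)
rospy.spin()
```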
To make sure that everything works as expected, run some tests. For example:

- See if you can connect to the drone:

  ```bash
  cd ~/bebop_ws/dronet/dronet_perception/launch
  roslaunch bebop_launch.launch
  ```

- See if you can receive images from it with `rqt_image_view`.
- See if you can publish control commands through the terminal.
- Try to run the DroNet network:

  ```bash
  cd ~/bebop_ws/dronet/dronet_perception/launch
  roslaunch dronet_launch.launch
  ```
There are three basic steps to perform a flight with DroNet:

- Launch the perception and control pipeline.
- Start the drone and feed through the computed commands.
- Land the drone.
There are two options for performing these steps:

- Publish high-level commands through the terminal [NOT RECOMMENDED]
- Implement your own Graphical User Interface (GUI)
DroNet directly produces flying commands for the Bebop drone. Closely supervise the robot at all times, especially when running the code for the first time. There might be some Bebop parameters you have to set to get good performance.
First, connect to the drone and launch the perception pipeline:

```bash
cd ~/bebop_ws/dronet/dronet_perception/launch
roslaunch full_perception_launch.launch
```

Then launch the control pipeline:

```bash
cd ~/bebop_ws/dronet/dronet_control/launch
roslaunch deep_navigation.launch
```

Start the Bebop:

```bash
rostopic pub --once /bebop/takeoff std_msgs/Empty
```

Enable control from DroNet (be cautious: the drone will be autonomous from now on):

```bash
rostopic pub --once /bebop/state_change std_msgs/Bool "data: true"
```

(Optional) Deactivate control from DroNet:

```bash
rostopic pub --once /bebop/state_change std_msgs/Bool "data: false"
```

Land the Bebop:

```bash
rostopic pub --once /bebop/land std_msgs/Empty
```
It is generally recommended to implement a GUI (http://wiki.ros.org/rqt) to perform all the aforementioned steps. This makes it easier to command the drone and to quickly run many experiments. The steps themselves remain the same as in the previous section.
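As a rough idea of what such an interface could look like, here is a minimal Tkinter stand-in (not a proper rqt plugin, which is the recommended route) with one button per step. It publishes exactly the same messages as the `rostopic` commands above.

```python
#!/usr/bin/env python
# Minimal stand-in for a flight GUI: four buttons that publish the same
# messages as the rostopic commands above. A proper rqt plugin is
# preferable; this Tkinter sketch only illustrates the idea.
import Tkinter as tk  # 'tkinter' on Python 3
import rospy
from std_msgs.msg import Bool, Empty

rospy.init_node('dronet_gui')
takeoff = rospy.Publisher('/bebop/takeoff', Empty, queue_size=1)
land = rospy.Publisher('/bebop/land', Empty, queue_size=1)
state = rospy.Publisher('/bebop/state_change', Bool, queue_size=1)

root = tk.Tk()
root.title('DroNet Bebop control')
tk.Button(root, text='Takeoff',
          command=lambda: takeoff.publish(Empty())).pack(fill='x')
tk.Button(root, text='Enable DroNet',
          command=lambda: state.publish(Bool(data=True))).pack(fill='x')
tk.Button(root, text='Disable DroNet',
          command=lambda: state.publish(Bool(data=False))).pack(fill='x')
tk.Button(root, text='Land',
          command=lambda: land.publish(Empty())).pack(fill='x')
root.mainloop()
```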
There might be some variables you would like to tune to get the maximum performance out of your drone. A comprehensive list can be found on the official documentation page.
The most important parameters are `SpeedSettingsMaxRotationSpeedCurrent`, `PilotingSettingsMaxTiltCurrent`, and `SpeedSettingsOutdoorOutdoor`.
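Since bebop_autonomy exposes its settings via dynamic_reconfigure, you can also change these parameters at runtime instead of editing the YAML file. Below is a small sketch using the dynamic_reconfigure Python client; the server name `/bebop/bebop_driver` and the example values are assumptions, so check which servers and values apply to your setup first.

```python
#!/usr/bin/env python
# Change Bebop piloting limits at runtime via dynamic_reconfigure.
# The server name and the values below are assumptions; list the
# available servers with: rosrun dynamic_reconfigure dynparam list
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('bebop_tuning')
client = Client('/bebop/bebop_driver', timeout=10)
client.update_configuration({
    'SpeedSettingsMaxRotationSpeedCurrent': 20.0,  # yaw-rate limit (deg/s)
    'PilotingSettingsMaxTiltCurrent': 10.0,        # max tilt (deg)
    'SpeedSettingsOutdoorOutdoor': 1,              # outdoor flight profile
})
```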
Additionally, you might want to tune the control pipeline accordingly. To do this, modify the file `deep_navigation.launch`. The most important parameter to tune there is `critical_prob`, the collision-probability threshold at which the drone stops: for low values the drone will be very conservative, while for high values it will stop only very close to obstacles.