
Week 3: Computer Vision with ROS and OpenCV

Make sure you keep all the code you develop in the workshops and also note down any commands you used (create a README.md file for your notes). You will need this again as you progress through this module.

In the lecture, you were introduced to ways of conducting image processing in Python using OpenCV. In this workshop, you will learn how to use it in practice to:

  1. retrieve images from ROS topics
  2. convert images into the OpenCV format
  3. perform image processing operations on the image
  4. command the robot based on your image processing output
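
To give a feel for the first two steps, here is a minimal sketch of a node that retrieves images from a topic and converts them with cv_bridge. The topic name '/camera/image_raw' and the QoS depth are assumptions; check `ros2 topic list` on your own robot, and note that a real camera may require a sensor-data QoS profile:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2


class ImageViewer(Node):
    def __init__(self):
        super().__init__('image_viewer')
        self.bridge = CvBridge()
        # '/camera/image_raw' is an assumption -- check `ros2 topic list` on your setup
        self.create_subscription(Image, '/camera/image_raw', self.image_callback, 10)

    def image_callback(self, msg):
        # convert the sensor_msgs/Image into a BGR OpenCV array
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('camera view', frame)
        cv2.waitKey(1)


def main():
    rclpy.init()
    rclpy.spin(ImageViewer())


if __name__ == '__main__':
    main()
```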

To get you off the ground quickly, all the source code shown in the lecture is available online. In particular, have a look at the examples referenced in the tasks below: opencv_bridge.py, colour_contours.py, and colour_chaser.py.

Tasks

Task 0: Prepare your environment

To make sure you have the latest version of the devcontainer, start by pulling it (the command is shown below). Hopefully, by now you know how to start the devcontainer in VSCode. If you have questions about how to do that, how to manage your git repository, or how to start the simulation, make sure you get support during the workshop immediately!
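
The pull command:

```bash
docker pull lcas.lincoln.ac.uk/lcas/devcontainer/ros2-teaching:2324-devel
```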

Task 1: OpenCV with Python and ROS

Develop Python code with the following abilities:

  1. Take the example code fragment opencv_bridge.py from the lecture and modify it so that you can read from the camera of your TurtleBot.
  2. Read images from your robot, display them using OpenCV methods, and try out the colour slicing presented in the lecture to segment a coloured object of your choice. When trying this in simulation, put some brightly coloured objects in front of the robot (in Gazebo, objects can be added from the Insert tab under the models.gazebosim.org list at the bottom; this may take a few minutes to load all the objects). Find suitable parameters to robustly segment that blob. You may take colour_contours.py as a starting point for your implementation (see also the sketch after this list).
  3. Use the output from above to publish std_msgs/String messages that contain information about the outcome of the operation (e.g. the mean value of the pixel intensities of the resulting image). (Hint: you'll need to create a Publisher of type std_msgs/String for this: self.publisher = self.create_publisher(String, '/msgs', 1), and then publish to it.)
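
A minimal sketch covering items 2 and 3 is below. The HSV bounds (here roughly green), the camera topic name, and the choice of '/result_topic' (taken from the summary at the end of this page; the hint above uses '/msgs') are all assumptions to adapt to your object and setup:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge
import cv2
import numpy as np


class ColourSlicer(Node):
    def __init__(self):
        super().__init__('colour_slicer')
        self.bridge = CvBridge()
        self.create_subscription(Image, '/camera/image_raw', self.image_callback, 10)
        # '/result_topic' follows the week's summary; the hint above uses '/msgs'
        self.publisher = self.create_publisher(String, '/result_topic', 1)

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # placeholder HSV bounds for a green object -- tune for your scene and lighting
        mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([70, 255, 255]))
        sliced = cv2.bitwise_and(frame, frame, mask=mask)
        out = String()
        out.data = f'mean intensity: {sliced.mean():.2f}'
        self.publisher.publish(out)
        cv2.imshow('sliced', sliced)
        cv2.waitKey(1)


def main():
    rclpy.init()
    rclpy.spin(ColourSlicer())


if __name__ == '__main__':
    main()
```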

Task 2: Colour Chasing Robot

Try out the colour-chasing robot presented in the lecture. Take the code from colour_chaser.py; you will need to find suitable parameters and colour spaces to segment out the relevant colours and make the robot move towards the object you want it to follow. Place new objects in the environment and move different objects around to see whether the robot chases them. Feel free to change the simulation environment to your liking! A simplified sketch of the core chasing logic follows below.
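
colour_chaser.py from the lecture is the reference implementation; the sketch below only illustrates the core idea (segment the colour, find the blob centroid, steer towards it), reusing the assumed topic names and placeholder colour bounds from above, with made-up gains you will need to tune:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge
import cv2
import numpy as np


class ColourChaserSketch(Node):
    def __init__(self):
        super().__init__('colour_chaser_sketch')
        self.bridge = CvBridge()
        self.create_subscription(Image, '/camera/image_raw', self.image_callback, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 1)

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # placeholder bounds, as in the colour-slicing sketch above
        mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([70, 255, 255]))
        twist = Twist()
        m = cv2.moments(mask)
        if m['m00'] > 0:
            # horizontal offset of the blob centroid from the image centre
            cx = m['m10'] / m['m00']
            error = cx - frame.shape[1] / 2
            # turn towards the blob; creep forward once it is roughly centred
            twist.angular.z = -0.002 * error  # made-up gain, tune it
            if abs(error) < 50:
                twist.linear.x = 0.1
        else:
            twist.angular.z = 0.3  # nothing segmented: rotate to search
        self.cmd_pub.publish(twist)


def main():
    rclpy.init()
    rclpy.spin(ColourChaserSketch())


if __name__ == '__main__':
    main()
```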

Task 3: First steps with the real robot

To summarise, this week you should:

  1. Develop Python code that subscribes to the image stream from the robot.
  2. Publish the output of some image processing as a std_msgs/String on a topic named /result_topic.
  3. Run the colour-chasing robot and understand its code.