# CMP3103 Week 3
Make sure you keep all the code you develop in the workshops and also note down any commands you used (create a `README.md` file for your notes). A good idea is to keep all of this in your own GitHub repository (you can even share it within your group). You will need it again as you go along in this module.
In the lecture, you were introduced to ways of conducting image processing in Python using OpenCV. In this workshop, you will learn how to:
- retrieve images from ROS topics
- convert images into the OpenCV format
- perform image processing operations on the image
- (optionally) command the robot based on your image processing output
To get you off the ground quickly, all the source code shown in the lecture is available online. In particular, have a look at:
- `opencv_intro.py`, which shows you how to load an image in OpenCV without ROS, along with some processing methods. Also look at the official OpenCV Python tutorials to gain a better understanding: image processing tutorials, colour slicing, finding contours, etc.
- `opencv_bridge.py`, showing you how to use CvBridge to read an image from a ROS topic.
- `colour_contours.py`, to get an idea about colour slicing as introduced in the lecture. Also read about Changing Colour Spaces.
Develop Python code with the following abilities:
- Take the example code fragment `opencv_bridge.py` from the lecture and modify it so you can read from the camera of your TurtleBot.
- Read images from your robot, display them using OpenCV methods, and try out colour slicing as presented in the lecture to segment a coloured object of your choice. When trying this in simulation, put some nicely coloured objects in front of the robot. Find suitable parameters to robustly segment that blob. You may take `colour_contours.py` as a starting point for your implementation.
- Use the output from above to publish `std_msgs/String` messages on a `Publisher` that contain information about the outcome of the operation (e.g. the mean value of the pixel intensities of the resulting image). (Hint: you'll need to create a `Publisher` with type `std_msgs/String` for this: `self.publisher = self.create_publisher(String, '/msgs', 1)`, and then publish to it.)
Try out the colour chasing robot presented in the lecture. Take the code from `colour_chaser.py`; you will need to find suitable parameters and colour spaces to segment out the relevant colours and make the robot move towards the object you want it to follow. Place new objects and move different objects around in the environment and see if the robot chases them. The TurtleBots can also be launched in an empty world to make finding a coloured object a bit simpler: `ros2 launch turtlebot3_gazebo empty_world.launch.py`.
- Develop Python code that subscribes to the image stream from the robot.
- Publish the output of some image processing as a `std_msgs/String` on a topic named `/result_topic`.
- Run the colour chasing robot and understand its code.
Also, browse through this collection of useful resources beyond what has been presented in the lecture: OpenCV and ROS
Copyright by Lincoln Centre for Autonomous Systems