CMP3103 Week 3
In the lecture, you were introduced to ways of conducting image processing in Python using OpenCV. In this workshop, you will learn how to
- retrieve images from ROS topics (both simulator and optionally from the real robot)
- convert images into the OpenCV format
- perform image processing operations on the image
- (optionally) command the robot based on your image processing output
To get you off the ground quickly, all the source code shown in the lecture is available online. In particular, have a look at
- `opencv_intro.py`, which shows you how to load an image in OpenCV without ROS and access it. Also look at the official OpenCV Python tutorials to gain a better understanding.
- `opencv_bridge.py`, showing you how to use CvBridge to read images from a topic.
- `color_contours.py`, to get an idea about colour slicing as introduced in the lecture (a minimal stand-alone sketch of this follows below). Also read about Changing Colour Spaces.
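If you want to experiment with colour slicing before touching ROS at all, the following is a minimal sketch in the spirit of `opencv_intro.py` and `color_contours.py`. The file name `test_image.png` and the HSV bounds are placeholders rather than values from the lecture code; adjust them to your own image and target colour.

```python
# Stand-alone colour-slicing example (no ROS needed). Assumes you have an
# image file 'test_image.png' to hand; the HSV bounds below are example
# values for a greenish object and will need tuning for your own target.
import cv2
import numpy

img = cv2.imread('test_image.png')          # load as a BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # hue makes colour selection easier

# Keep only pixels whose hue/saturation/value fall inside the chosen range
mask = cv2.inRange(hsv, numpy.array([50, 100, 50]), numpy.array([70, 255, 255]))

# Find the outlines of the segmented blobs and draw them on the original image
# (the contour list is the second-to-last return value in all OpenCV versions)
contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(img, contours, -1, (0, 0, 255), 2)

cv2.imshow('mask', mask)
cv2.imshow('contours', img)
cv2.waitKey(0)
```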
Make sure you call in a demonstrator to show your achievements in order to gain the marks.
- Develop Python code with the following abilities:
  - Take the example code fragment `opencv_bridge.py` from the lecture and modify it so you can read from the camera of your (simulated and real) turtlebots.
  - Read images from your (real and simulated) robot, display them using OpenCV methods, and try out colour slicing as presented in the lecture to segment a coloured object of your choice, both in simulation and in reality. When trying this in simulation, put some nicely coloured objects in front of the robot. Find suitable parameters to robustly segment that blob. You may take [`color_contours.py`](https://github.com/LCAS/teaching/blob/kinetic/cmp3103m-code-fragments/scripts/color_contours.py) as a starting point for your implementation.
  - Use the output from above to then publish `std_msgs/String` messages on a `Publisher` that contain information about the outcome of the operation (e.g. the mean value of the pixel intensities of the resulting image). (Hint: you'll need to create a Publisher of type `std_msgs/String` for this, `p = rospy.Publisher('/result_topic', String)`, and then publish to it.) A sketch putting these steps together is given after this task list.

  Make sure to show your working code to demonstrators, having it work both in simulation and on the robot. Be prepared to discuss the differences you observe between simulation and reality.
- (Optional) Research the Hough Transform and see how it can be used to detect lines with OpenCV for Python. Understand the concept of the Hough transform from your research and then also look at the circle detection code in `hough_circle.py`. Make it work with actual image data received via a topic from your (simulated/real) robot (see the circle detection sketch after this task list).
- (Optional) Try out the "followbot" presented in the lecture. Take the code from https://github.com/marc-hanheide/ros_book_sample_code/tree/master/chapter12, described in chapter 12 of the "Programming Robots with ROS" book, which is also available on Blackboard. Note: make sure the simulation can find the additional resources by first running `` export GAZEBO_RESOURCE_PATH=$GAZEBO_RESOURCE_PATH:`pwd` `` (while in the chapter 12 directory) in the same terminal in which you afterwards run `roslaunch chapter12 course.launch`.
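As a starting point for the first task, here is a hedged sketch that subscribes to a camera topic, converts the image with CvBridge, performs colour slicing, and publishes a `std_msgs/String` describing the result. The camera topic `/camera/rgb/image_raw`, the result topic `/result_topic`, and the HSV bounds are assumptions; check `rostopic list` on your own (simulated or real) turtlebot and adjust them.

```python
#!/usr/bin/env python
# Sketch only: subscribe to the robot's camera, segment a colour, and publish
# a std_msgs/String with the mean intensity of the segmented image. Topic
# names and HSV bounds are assumptions -- adapt them to your robot.
import cv2
import numpy
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import String


class ColourSlicer:
    def __init__(self):
        self.bridge = CvBridge()
        # Publisher for the textual result of the image processing
        self.pub = rospy.Publisher('/result_topic', String, queue_size=1)
        # Subscribe to the camera; the topic name may differ on your robot
        self.sub = rospy.Subscriber('/camera/rgb/image_raw', Image, self.callback)

    def callback(self, msg):
        # Convert the ROS Image message into an OpenCV BGR image
        img = self.bridge.imgmsg_to_cv2(msg, 'bgr8')

        # Colour slicing in HSV space; the bounds are example values only
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, numpy.array([50, 100, 50]),
                           numpy.array([70, 255, 255]))

        # Publish e.g. the mean pixel intensity of the segmented image
        mean_value = numpy.mean(mask)
        self.pub.publish(String(data='mean intensity: %f' % mean_value))

        cv2.imshow('mask', mask)
        cv2.waitKey(1)


if __name__ == '__main__':
    rospy.init_node('colour_slicer')
    ColourSlicer()
    rospy.spin()
```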
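For the optional Hough-circle task, the sketch below hooks circle detection up to live images from a topic. Again, the topic name and the `HoughCircles` parameters are assumptions that you will need to tune; the detection follows the standard OpenCV recipe (greyscale, blur, `cv2.HoughCircles`) rather than necessarily the exact parameters used in `hough_circle.py`.

```python
#!/usr/bin/env python
# Sketch: detect circles in live camera images. The topic name and the
# HoughCircles parameters are assumptions to be tuned on your own robot.
import cv2
import numpy
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()


def callback(msg):
    img = bridge.imgmsg_to_cv2(msg, 'bgr8')
    # Hough circle detection expects a single-channel, smoothed image
    gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=10, maxRadius=100)
    if circles is not None:
        # Each detected circle is (x, y, radius); draw them on the colour image
        for x, y, r in numpy.around(circles[0]):
            cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.imshow('circles', img)
    cv2.waitKey(1)


if __name__ == '__main__':
    rospy.init_node('hough_circle_listener')
    rospy.Subscriber('/camera/rgb/image_raw', Image, callback)
    rospy.spin()
```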
Also, browse through this collection of useful resources beyond what has been presented in the lecture in B3: OpenCV and ROS
Copyright by Lincoln Centre for Autonomous Systems