ROS wrapper for OpenPose | It currently supports the following cameras (support for others is planned)-
- Any color camera such as webcam etc ✔️
- Intel RealSense Camera ✔️
- Microsoft Kinect v2 Camera ✔️
- Stereolabs ZED2 Camera ✔️ (see thanks section)
- Azure Kinect Camera ✔️
Sample video showing visualization on RViz
- Dependencies
- Installation
- Configuration
- Operation Modes and APIs
- Camera Run Instructions
- FAQ
- Test Configuration
- Citation
- Issues
- Thanks
Supported OpenPose Versions:
- 1.7.0 (latest; see point #1 in the troubleshooting section)
- 1.6.0 (see thanks section)
- 1.5.1
- 1.5.0
Note: Additionally, a camera-specific ROS driver such as one of the following is required, depending on your camera model-
- realsense-ros: For Intel RealSense Camera
- iai_kinect2: For Microsoft Kinect v2 Camera
- zed-ros-wrapper: For Stereolabs ZED2 Camera
- azure_kinect_ros_driver: For Azure Kinect Camera
- Make sure to download the complete repository. Use
git clone https://github.com/ravijo/ros_openpose.git
or download the zip, as per your convenience.
- Invoke the catkin tool inside the ROS workspace, i.e.,
catkin_make
- Make the Python scripts executable by using the commands below-
roscd ros_openpose/scripts
chmod +x *.py
- While compiling the package, if the following error is reported at the terminal-
error: no matching function for call to ‘op::WrapperStructPose::WrapperStructPose(<brace-enclosed initializer list>)’
In this case, please check out OpenPose version 1.7.0 by running the following command at the root directory of the OpenPose installation-
git checkout tags/v1.7.0
- While compiling the package, if any of the following errors is reported at the terminal-
error: ‘check’ is not a member of ‘op’
error: no match for ‘operator=’ (operand types are ‘op::Matrix’ and ‘const cv::Mat’)
error: invalid initialization of reference of type ‘const op::String&’ from expression of type ‘fLS::clstring {aka std::__cxx11::basic_string<char>}’
In this case, please check out OpenPose version 1.6.0 by running the following command at the root directory of the OpenPose installation-
git checkout tags/v1.6.0
Do not forget to run
sudo make install
to install OpenPose system-wide.
- If compilation fails by showing the following error-
/usr/bin/ld: cannot find -lThreads::Threads
In this case, please add the following by editing the CMakeLists.txt-
find_package(Threads REQUIRED)
For more information, please check here.
- While compiling the package, if the following error is reported at the terminal-
error: no match for ‘operator=’ (operand types are ‘op::Matrix’ and ‘const cv::Mat’)
In this case, please update OpenPose. Most likely, an old version of OpenPose is installed, so please check out OpenPose from the master branch as described here. Note that OpenPose version 1.5.1 is still supported; alternatively, you can check out OpenPose version 1.5.1 by running the following command at the root directory of the OpenPose installation-
git checkout tags/v1.5.1
Do not forget to run
sudo make install
to install OpenPose system-wide.
The main launch file is run.launch. It has the following important arguments-
- model_folder: It represents the full path to the model directory of OpenPose. Kindly modify it as per the OpenPose installation on your machine. Please edit the run.launch file as shown below-
<arg name="openpose_args" value="--model_folder /home/ravi/openpose/models/"/>
- openpose_args: It is provided to support the standard OpenPose command-line arguments. Please edit the run.launch file as shown below (see the combined example after this list)-
<arg name="openpose_args" value="--face --hand"/>
- camera: It can only be one of the following: realsense, kinect, azurekinect, zed2, nodepth. The default value of this argument is realsense. See below for more information.
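Since model_folder is itself passed through openpose_args, multiple OpenPose flags can be combined into a single value. An illustrative combination (the path is a placeholder for your own installation)-
<arg name="openpose_args" value="--model_folder /home/ravi/openpose/models/ --face --hand"/>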
- Synchronous API (see thanks section)
  - Uses the op_wrapper.emplaceAndPop() method provided by OpenPose (see the sketch after this list)
  - By default this version is disabled. Therefore, please set synchronous:=true and provide py_openpose_path while calling run.launch. For example:
roslaunch ros_openpose run.launch camera:=realsense synchronous:=true py_openpose_path:=absolute_path_to_py_openpose
  - If the arg py_openpose_path is not specified, then the CPP node is used. Otherwise, the Python node is used. Therefore, please compile OpenPose accordingly if you plan to use the Python bindings of OpenPose.
- Asynchronous API
  - Uses two workers, op::WorkerProducer and op::WorkerConsumer, provided by OpenPose
  - Uses the OpenPose CPP APIs
  - By default this version is enabled. Users are advised to try synchronous:=true if not satisfied with the performance.
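For intuition, here is a minimal sketch of what the Synchronous API above boils down to when using the OpenPose Python bindings (assuming OpenPose 1.7.0 with pyopenpose built; the paths are placeholders, and this is not the wrapper's actual node):

import sys
import cv2

sys.path.append('/path/to/openpose/build/python')  # same location as py_openpose_path
from openpose import pyopenpose as op

# Configure and start OpenPose once
params = {'model_folder': '/home/ravi/openpose/models/'}
op_wrapper = op.WrapperPython()
op_wrapper.configure(params)
op_wrapper.start()

# Feed one color frame and synchronously receive its keypoints
frame = cv2.imread('sample.jpg')
datum = op.Datum()
datum.cvInputData = frame
op_wrapper.emplaceAndPop(op.VectorDatum([datum]))  # OpenPose 1.7.0 signature
print(datum.poseKeypoints)  # people x keypoints x (x, y, score) in pixel space

In ros_openpose, the color frames come from the ROS subscriber instead of cv2.imread, and the detected keypoints are combined with depth and published as ROS messages.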
In this section, you will find the instructions for running ros_openpose with one of the following cameras: a color camera (no depth), Intel RealSense, Kinect v2, Azure Kinect, and ZED2. If you have a different camera and would like to use ros_openpose with depth properties, please turn to the FAQ section for tips and guidance on achieving this.
Color camera (nodepth):
- Make sure that the ROS env is sourced properly by executing the following command-
source devel/setup.bash
- Start the ROS package of your camera. Basically, this package is going to capture images from your camera and publish them on a ROS topic. Make sure to set color_topic inside the config_nodepth.launch file to the correct ROS topic (see the example after these steps).
- Invoke the main launch file by executing the following command-
roslaunch ros_openpose run.launch camera:=nodepth
Note: To confirm that the ROS package of your camera is working properly, please check that it is publishing images by executing the following command-
rosrun image_view image_view image:=YOUR_ROSTOPIC
Here YOUR_ROSTOPIC must have the same value as color_topic.
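For example, with a generic webcam driven by the usb_cam package (which publishes color images on /usb_cam/image_raw by default), the color_topic entry in config_nodepth.launch would point at that topic; the exact form should mirror the existing entry in the file, e.g.-
<arg name="color_topic" value="/usb_cam/image_raw"/>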
Intel RealSense camera:
- Make sure that the ROS env is sourced properly by executing the following command-
source devel/setup.bash
- Invoke the main launch file by executing the following command (the camera argument defaults to realsense)-
roslaunch ros_openpose run.launch
Microsoft Kinect v2 camera:
- Make sure that the ROS env is sourced properly by executing the following command-
source devel/setup.bash
- Invoke the main launch file by executing the following command-
roslaunch ros_openpose run.launch camera:=kinect
Azure Kinect camera:
- Make sure that the ROS env is sourced properly by executing the following command-
source devel/setup.bash
- Invoke the main launch file by executing the following command-
roslaunch ros_openpose run.launch camera:=azurekinect
Stereolabs ZED2 camera:
- Change the parameter openni_depth_mode in zed-ros-wrapper/zed_wrapper/params/common.yaml to true (the default is false).
- Make sure that the ROS env is sourced properly by executing the following command-
source devel/setup.bash
- Invoke the main launch file by executing the following command-
roslaunch ros_openpose run.launch camera:=zed2
- How to add my own depth camera into this wrapper?
You might be able to add your own depth camera by creating your own config_<camera_name>.launch file based on one of the existing ones and modifying it to suit your specific camera. Go inside the launch subdirectory, make a copy of config_realsense.launch, and save it as config_<camera_name>.launch. Remember that whatever you choose as the camera_name should be used as an argument when launching run.launch to run ros_openpose. Make the necessary changes to the color_topic, depth_topic, cam_info_topic, and frame_id arguments in the file. Make sure that:
- The input depth images are already aligned to the color images.
- The depth and color images have the same dimensions. Therefore, each pixel from the color image can be mapped to its corresponding depth pixel at the same x, y location.
- The depth images contain depth values in millimeters and are represented by TYPE_16UC1 using OpenCV.
- The cam_info_topic contains the camera calibration parameters supplied by the manufacturer.
To achieve visualizations, you also need to create modified versions of the RViz configuration files only_person_<camera_name>.rviz and person_pointcloud_<camera_name>.rviz.
Please check here for a similar question.
If you successfully create the modified files and run ros_openpose with a depth camera that is not mentioned here, please share your files and the necessary steps for running with your camera, so that this useful information can be made available to others.
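For intuition, the sketch below shows how a detected 2D keypoint is typically lifted to a 3D point under the assumptions above (aligned depth in millimeters, TYPE_16UC1, pinhole intrinsics from cam_info_topic); the function and variable names are illustrative, not the wrapper's actual code:

import numpy as np

def pixel_to_3d(u, v, depth_image, cam_info):
    # Back-project a color pixel (u, v) to a 3D point in the camera frame.
    # depth_image: aligned 16-bit depth image in millimeters (OpenCV TYPE_16UC1)
    # cam_info: sensor_msgs/CameraInfo message carrying the pinhole K matrix
    fx, fy = cam_info.K[0], cam_info.K[4]
    cx, cy = cam_info.K[2], cam_info.K[5]
    z = depth_image[int(v), int(u)] / 1000.0  # millimeters -> meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])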
- How to run this wrapper with limited resources such as a low-end GPU, limited RAM, etc.?
Below is a brief explanation of the ros_openpose package. This package does not use the GPU directly. However, it depends on OpenPose, which uses the GPU heavily. It contains a few ROS subscribers, which copy data from the camera using ROS. Next, it employs two workers, namely the input and output workers. The job of the input worker is to provide color images to OpenPose, whereas the role of the output worker is to receive the keypoints detected in 2D (pixel) space. The output worker then converts 2D pixels to 3D coordinates. If the camera provides no new frame, the input worker waits for 10 milliseconds and then checks again; if there is still no new frame, it waits another 10 milliseconds, and the cycle continues. In this way, we ensure that the CPU gets some time to sleep (indirectly lowering the CPU usage).
- If the CPU usage is high, try increasing the sleep value (SLEEP_MS) as defined here.
- Try reducing --net_resolution and using --model_pose COCO (see the example after this list).
- Try disabling multithreading in OpenPose simply by supplying --disable_multi_thread to openpose_args inside the run.launch file.
- Another easy option is to decrease the FPS of your camera. Please lower it as far as your application allows.
Please check here for a similar question.
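As an illustration of combining these flags (the resolution below is only an example; choose a value that fits your GPU memory), openpose_args in run.launch could be set like-
<arg name="openpose_args" value="--net_resolution -1x256 --model_pose COCO --disable_multi_thread"/>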
- How to find the version of OpenPose installed on my machine?
Please use the shell script get_openpose_version.sh as shown below-
sh get_openpose_version.sh
You can use cmake as well. See here.
This package has been tested on the following environment configuration-
Name | Value |
---|---|
OS | Ubuntu 14.04.6 LTS (64-bit) |
RAM | 16 GB |
Processor | Intel® Core™ i7-7700 CPU @ 3.60GHz × 8 |
Kernel | Version 4.4.0-148-generic |
ROS | Indigo |
GCC | Version 5.5.0 |
OpenCV | Version 2.4.8 |
OpenPose | Version 1.5.1 |
GPU | GeForce GTX 1080 |
CUDA | Version 8.0.61 |
cuDNN | Version 5.1.10 |
If you used ros_openpose for your work, please cite it.
@misc{ros_openpose,
author = {Joshi, Ravi P. and Choi, Andrew and Tan, Xiang Zhi and Van den Broek, Marike K and Luo, Rui and Choi, Brian},
title = {{ROS OpenPose}},
year = {2019},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/ravijo/ros_openpose}}
}
Please check here and create issues accordingly.
The following authors are sincerely acknowledged for their improvements to this package-
- Andrew Choi: For providing the synchronous version, i.e., op_wrapper.emplaceAndPop() support for OpenPose 1.6
- Xiang Zhi Tan: For providing compatibility with OpenPose 1.6
- Marike Koch van den Broek: For adding support for Stereolabs ZED2 Camera
- Rui Luo: For fixing a crash in ros_openpose_synchronous.py when nobody, or only a partial body, is visible
- Brian Choi: For fixing a gflags library issue that caused a compilation error
- RiRyuichi: For providing support for face keypoints