segmentation_pointcloud_fusion doesn't work properly with multiple masks #8460

Closed
3 tasks done
StepTurtle opened this issue Aug 13, 2024 · 3 comments
Assignees
Labels
component:perception Advanced sensor data processing and environment understanding. (auto-assigned)

Comments

Contributor

StepTurtle commented Aug 13, 2024

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

I want to use the segmentation_pointcloud_fusion node with 6 cameras, and I have a problem. With multiple cameras and multiple /perception/object_recognition/detection/mask? outputs from autoware_tensorrt_yolox, I expect to see a point cloud fused from all cameras. However, I only see the point cloud fused from a single camera, specifically the last camera processed by the loop below.

In the following loop in the FusionNode<TargetMsg3D, Obj, Msg2D>::subCallback() function, the output point cloud is cleared at every iteration, so the points deleted in the previous iteration are restored.

for (std::size_t roi_i = 0; roi_i < rois_number_; ++roi_i) {
  // ...
  fuseOnSingleImage(
    *input_msg, roi_i, *((cached_roi_msgs_.at(roi_i))[matched_stamp]),
    camera_info_map_.at(roi_i), *output_msg);
  // ...
}

As a result, I only see the points related to the last camera being removed. Did I do something wrong, or is this the expected behavior?
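The effect can be sketched in isolation. The toy C++ below is not the actual Autoware code; inMask(), fuseBuggy(), and fuseFixed() are hypothetical stand-ins that only illustrate why rebuilding the output from the full input on every camera iteration keeps just the last camera's removals, while accumulating a removal flag across cameras keeps all of them:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the per-camera mask check inside
// fuseOnSingleImage(): returns true when a point falls inside camera
// `camera`'s segmentation mask (i.e. the point should be removed).
// Toy rule: camera i masks out exactly point i.
static bool inMask(std::size_t camera, int point)
{
  return static_cast<int>(camera) == point;
}

// Buggy pattern: the output is rebuilt from the full input on every loop
// iteration, so removals made by earlier cameras are restored and only the
// last camera's mask has any effect.
std::vector<int> fuseBuggy(const std::vector<int> & input, std::size_t rois_number)
{
  std::vector<int> output;
  for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) {
    output.clear();  // <-- discards previous cameras' removals
    for (int p : input) {
      if (!inMask(roi_i, p)) output.push_back(p);
    }
  }
  return output;
}

// Fixed pattern: accumulate a removal flag per point across all cameras,
// then filter once at the end, so every camera's mask contributes.
std::vector<int> fuseFixed(const std::vector<int> & input, std::size_t rois_number)
{
  std::vector<bool> remove(input.size(), false);
  for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) {
    for (std::size_t i = 0; i < input.size(); ++i) {
      if (inMask(roi_i, input[i])) remove[i] = true;
    }
  }
  std::vector<int> output;
  for (std::size_t i = 0; i < input.size(); ++i) {
    if (!remove[i]) output.push_back(input[i]);
  }
  return output;
}
```

With 7 points and 6 cameras, fuseBuggy() removes only the last camera's point, while fuseFixed() removes one point per camera, which matches the behavior I observe.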

FYI @badai-nguyen, I believe you are the maintainer of this package or node.

Expected behavior

I expect to see something like this when I run the segmentation_pointcloud_fusion node: https://youtu.be/Ho4lkrmtuz4

To obtain these results, I made some basic changes in fusion_node.cpp and segmentation_pointcloud_fusion/node.cpp.

Actual behavior

I see something like this when I run the segmentation_pointcloud_fusion node: https://youtu.be/GaNBf3vUcOo

Steps to reproduce

I did not run the whole Autoware stack; I just launched yolox and image_projection_based_fusion.

There are six cameras and one lidar topic in my bag file.

  1. ros2 launch autoware_tensorrt_yolox multiple_yolox.launch.xml
     • In this launch file, I added a separate output/mask topic for each yolox node, because by default the mask topic appears to be the same for all nodes.
  2. ros2 launch autoware_image_projection_based_fusion segmentation_pointcloud_fusion.launch.xml
  3. ros2 bag play -r 0.015 rosbag/
@StepTurtle StepTurtle added the component:perception Advanced sensor data processing and environment understanding. (auto-assigned) label Aug 13, 2024
@vividf vividf assigned vividf and badai-nguyen and unassigned vividf Aug 16, 2024
Contributor

vividf commented Aug 30, 2024

Currently checking this

Contributor

vividf commented Sep 5, 2024

@StepTurtle
Thanks for your report; I think this is a bug.
Could you test with PR #8769 and check whether it solves your issue?
Thanks!

@StepTurtle
Contributor Author

@vividf

Thanks for your PR. Yes, it solved my problem, and it now works with multiple cameras and masks. 👍🏽
