
Update occupancy grid map frame to gain longer range visibility in the intersection #2906

Closed
3 tasks done
YoshiRi opened this issue Feb 17, 2023 · 10 comments
Labels
type:new-feature New functionalities or additions, feature requests.

Comments


YoshiRi commented Feb 17, 2023

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Currently, we use the base_link frame to generate the occupancy grid map.
The base_link frame is typically set at the center of the rear axle, behind the LiDAR and the driver rather than at their positions. (See the following figure.)

Therefore, the field of view of an occupancy grid map generated in base_link is narrower than the driver's or the sensor's viewpoint, for example when turning right at an intersection.

[Figure: base_link sits at the rear axle center, behind the LiDAR and the driver]

Purpose

We need to change the grid map estimation frame to achieve:

  • wider-range visibility
  • principled correctness
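
To make the motivation concrete, here is a minimal 2D visibility sketch (hypothetical geometry and coordinates, not Autoware code): a point on the crossing road is hidden from the rear-axle base_link by a building corner, but visible from an origin a few meters forward, roughly where a top LiDAR would sit.

```python
def ccw(a, b, c):
    # Signed area test: >0 if a->b->c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Strict proper crossing of segments p1-p2 and q1-q2.
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0) and \
           (ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def visible(origin, target, wall):
    # A target is visible if the sight line does not cross the wall.
    return not segments_intersect(origin, target, wall[0], wall[1])

wall = [(8.0, 4.0), (8.0, 20.0)]   # building edge at the intersection corner
target = (15.0, 8.0)               # point on the crossing road

base_link = (0.0, 0.0)             # rear axle center
top_lidar = (3.0, 0.0)             # ~3 m forward of the rear axle (assumed)

print(visible(base_link, target, wall))  # False: blocked by the corner
print(visible(top_lidar, target, wall))  # True: forward origin sees past it
```

The numbers are illustrative only, but they show why rays cast from the rear axle can terminate on an occluder that a sensor-frame origin would see around.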

Possible approaches

I think there are two possible approaches:

| name | figure | note |
| --- | --- | --- |
| current | (figure) | current setting |
| plan A | (figure) | generate the occupancy grid map in another frame |
| plan B | (figure) | more faithful to sensor visibility |

Plan A should be easier to implement.
Plan B would accurately represent the visible range of each sensor.

Since we usually rely only on the top LiDAR to sense distant objects, I think adopting plan A and setting its frame to the top LiDAR will be enough.
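
As a rough illustration of plan A (a simplified toy, not the actual probabilistic occupancy grid map node), the scan hits stay the same but the ray-cast origin used to sweep free space is a selectable parameter, so choosing the top LiDAR frame changes which cells become known:

```python
def raycast_grid(scan_origin, hits, grid_size=40, res=0.5):
    """Toy occupancy grid: -1 unknown, 0 free, 100 occupied.
    Cells along each ray from `scan_origin` to a hit are marked free;
    the hit cell is marked occupied; everything else stays unknown."""
    grid = [[-1] * grid_size for _ in range(grid_size)]

    def to_cell(p):  # world coords (m) -> grid indices, map centered at 0
        return int(p[0] / res) + grid_size // 2, int(p[1] / res) + grid_size // 2

    ox, oy = to_cell(scan_origin)
    for hx, hy in (to_cell(h) for h in hits):
        steps = max(abs(hx - ox), abs(hy - oy), 1)
        for i in range(steps):  # free cells from the origin up to the hit
            cx = ox + round((hx - ox) * i / steps)
            cy = oy + round((hy - oy) * i / steps)
            grid[cy][cx] = 0
        grid[hy][hx] = 100

    return grid

# Same single hit, two candidate origins (base_link vs. a LiDAR ~3 m forward):
g_base = raycast_grid((0.0, 0.0), [(5.0, 3.0)])
g_lidar = raycast_grid((3.0, 0.0), [(5.0, 3.0)])
free = lambda g: sum(c == 0 for row in g for c in row)
print(free(g_base), free(g_lidar))  # the swept free area depends on the origin
```

The occupied cell is identical in both maps; only the free/unknown split changes, which is exactly the knob plan A exposes.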

Definition of done

[TBD]

Should be confirmed in scenarios involving right turns at intersections.


soblin commented Feb 17, 2023

In the planning modules, the main consumers of the occupancy grid map are:

  • occlusion_spot -> needs occlusion information over a good range
  • intersection -> currently this module is only interested in the nearest occlusion on the upcoming lane
  • pullover/pullout -> may need only local occupancy information

This proposal may be especially useful for the intersection module.

My remark is that the best plan depends on the sensor configuration. For example, if a better solid-state LiDAR were mounted on the front of the vehicle, its field of view in a specific direction could be better than the top LiDAR's.

@BonoloAWF BonoloAWF added the type:new-feature New functionalities or additions, feature requests. label Feb 17, 2023
@VRichardJP

Although it might be more tedious to implement, plan B looks like the only valid option to me: if a vehicle has multiple sensors at several places, it is because visibility is not the same everywhere. For small vehicles such as cars, the top lidar is indeed the one with the widest/farthest FOV, but when it comes to larger vehicles (minibus, shuttle, bus, truck, etc), no sensor can clearly see everything around.


miursh commented Feb 21, 2023

I suppose plan B is more suitable for expressing sensor FoV.
However, it would be quite complicated to implement, and I believe there are several design points.
E.g., isn't it difficult to distinguish "out of sensor FoV" from "free space" when using only a limited-FoV sensor, since both appear as no-return points?
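
One way to phrase that design point (a hypothetical fusion rule, not necessarily what any Autoware node implements): keep per-sensor cell states tri-state, so "no return because the cell is outside the FoV" stays unknown, and only a sensor that actually traced a ray through a cell may vote it free.

```python
# Tri-state cell values, matching nav_msgs/OccupancyGrid conventions.
UNKNOWN, FREE, OCCUPIED = -1, 0, 100

def fuse_cells(observations):
    """Fuse per-sensor states for one cell. A cell outside a sensor's
    FoV is UNKNOWN for that sensor; it never votes FREE just because
    no return was received."""
    if OCCUPIED in observations:
        return OCCUPIED   # any confirmed hit wins (conservative)
    if FREE in observations:
        return FREE       # at least one sensor swept a ray through it
    return UNKNOWN        # no sensor covers this cell at all

print(fuse_cells([UNKNOWN, FREE]))     # -> 0: front LiDAR confirmed free
print(fuse_cells([UNKNOWN, UNKNOWN]))  # -> -1: out of every sensor's FoV
```

Under this rule the ambiguity miursh raises is resolved per sensor before fusion, rather than from a single merged point cloud.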

@taikitanaka3

@soblin
I think generating the grid map only in the driver frame is also OK. I don't know of any module that requires the grid map in the base_link frame.


YoshiRi commented Feb 23, 2023

I now agree with VRichardJP and miursh that plan B should be the final solution.
However, it requires many changes, and there are also concerns about computational load.

So I implemented plan A as an interim solution.
@soblin could you check whether this PR improves our planning scenarios?


stale bot commented Apr 24, 2023

This pull request has been automatically marked as stale because it has not had recent activity.

@stale stale bot added the status:stale Inactive or outdated issues. (auto-assigned) label Apr 24, 2023
@idorobotics

@YoshiRi what is the current status for this issue?

@stale stale bot removed the status:stale Inactive or outdated issues. (auto-assigned) label Oct 5, 2023

YoshiRi commented Oct 5, 2023

@idorobotics
Sorry, this matter is currently on hold due to other prioritized tasks.
The latest status is in DevelopmentAboutOccupancyGridMapFusion.pdf

Remaining tasks

OGM fusion will be available after merging the following two PRs.

Related PRs

  • Solution A: move scan origin
    • PR#2939 : enable to select gridmap origin frame
    • PR#3032 : add scan frame option and fix scan
  • Solution B: create ogm in each sensor
    • PR#3032 : Separate scan_frame and gridmap_origin
    • PR#3054 : Filter obstacle pointcloud by raw pointcloud
    • PR#3312 : Publish time synced raw pointcloud from sensing component (WIP)
  • Solution B: OGM Fusion Node
    • PR#3058 : Refactor OGM launcher
    • PR#3340 : Bug fix
    • PR#5993 : Add grid map fusion node
  • performance fix:
    • PR#6865 : enable downsample
    • PR#6857: use raw pc to define visibility


stale bot commented Dec 5, 2023

This pull request has been automatically marked as stale because it has not had recent activity.


YoshiRi commented May 31, 2024

All features are merged and successfully tested.
Also see related discussion: https://github.com/orgs/autowarefoundation/discussions/4158#discussioncomment-8664198.

7 participants