Update occupancy grid map frame to gain longer range visibility in the intersection #2906
Comments
In planning modules, the main consumers of the occupancy grid map are:
This proposal may be especially useful for the intersection module. My remark is that the best plan depends on the sensor configuration: for example, if a better solid-state LiDAR is attached to the front of the vehicle, its field of view in that direction may be better than the top LiDAR's.
Although it might be more tedious to implement, plan B looks like the only valid option to me: if a vehicle has multiple sensors at several places, it is because visibility is not the same everywhere. For small vehicles such as cars, the top LiDAR is indeed the one with the widest/farthest FOV, but when it comes to larger vehicles (minibus, shuttle, bus, truck, etc.), no single sensor can clearly see everything around.
I suppose plan B is more suitable for expressing the sensor FoV.
@soblin
Now I agree with VRichardJP and miurush that plan B should be the final solution, so I implemented plan A as an interim solution.
This pull request has been automatically marked as stale because it has not had recent activity.
@YoshiRi what is the current status of this issue?
@idorobotics Remaining tasks: OGM fusion will be available after merging the following two PRs.

Related PRs:
All features are merged and successfully tested.
Checklist
Description
Currently, we use the `base_link` frame to generate the occupancy grid map. The `base_link` frame is typically set at the center of the rear axle, toward the rear of the vehicle rather than at the LiDAR or the driver's position. (See the following figure.)

Therefore, the field of view of an occupancy grid map generated in the `base_link` frame is narrower than that of the driver or the sensor viewpoint, for example when turning right at an intersection.

Purpose
We need to change the grid map generation frame to achieve:
Possible approaches
I think there are two possible approaches:
Plan A should be easier to implement.
Plan B would accurately represent the visible range of each sensor.
Since we often use only the top LiDAR to sense distant objects, I think using plan A and setting the grid frame to the top LiDAR sensor frame will be sufficient.
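To make the effect of plan A concrete, here is a minimal sketch of centering the grid on a sensor frame instead of `base_link`. All poses, offsets, and grid dimensions below are illustrative assumptions (not Autoware parameters), and the 2D pose composition stands in for the TF lookup the real node would perform:

```python
import math

def compose(parent_pose, child_offset):
    """Compose 2D poses: express a child offset given in the parent frame
    as a pose in the parent's reference frame. Stand-in for a TF lookup."""
    px, py, pyaw = parent_pose
    ox, oy, oyaw = child_offset
    return (px + ox * math.cos(pyaw) - oy * math.sin(pyaw),
            py + ox * math.sin(pyaw) + oy * math.cos(pyaw),
            pyaw + oyaw)

def grid_origin(center_xy, grid_size):
    """Lower-left corner of an axis-aligned square grid centered on a point."""
    cx, cy = center_xy
    half = grid_size / 2.0
    return (cx - half, cy - half)

# Vehicle pose in the map frame, facing +y (e.g. approaching an intersection)
base_link_in_map = (10.0, 5.0, math.pi / 2)
# Assumed static transform: top LiDAR mounted 3 m ahead of base_link
lidar_in_base = (3.0, 0.0, 0.0)

lidar_in_map = compose(base_link_in_map, lidar_in_base)

grid_size = 100.0
origin_base = grid_origin(base_link_in_map[:2], grid_size)   # current behavior
origin_lidar = grid_origin(lidar_in_map[:2], grid_size)      # plan A

# Centering on the LiDAR shifts the whole grid 3 m in the travel direction,
# so the sensor keeps the full grid_size/2 of coverage ahead of itself.
print(origin_base)   # → (-40.0, -45.0)
print(origin_lidar)  # → (-40.0, -42.0)
```

With a `base_link`-centered grid, the sensor effectively loses its mounting offset from the forward half of the grid; plan A recovers that margin simply by changing which frame the grid is anchored to, without any per-sensor FoV modeling (that would be plan B).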
Definition of done
[TBD]
Should be confirmed in scenarios involving right turns at intersections.