Release of the calibration tools repository for autonomous driving #2727
knzo25 started this conversation in Show and tell
Replies: 1 comment
- Could you check this link: Map-based lidar-lidar calibration: Link. It looks like it is unavailable.
We are happy to announce the release of our calibration tools repository for autonomous driving. It implements several tools to calibrate different components, such as control and localization, but our current focus is on the development of sensor-related calibration tools which, although designed for autonomous driving use cases, are general enough to be used in other fields as well.
In contrast with most openly available methods, which are usually released as isolated proofs of concept or by-products of research projects, ours is a stable, production-grade, and integrated set of tools with plans for long-term support. Conversely, while there are several production-level tool sets, we distinguish ourselves by releasing ours as open source, in the spirit of the AWF.
The following is a list of the implemented sensor calibration tools, together with their requirements, processes, and limitations:
Manual extrinsic calibration: Link
Intended as a proof of concept of our calibration API and as a baseline against which to compare the automatic calibration tools, this method lets the user directly modify the values of the TF tree, with an RViz view to evaluate the resulting TFs and calibration.
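To make this concrete, the following is a minimal sketch (not the tool's actual implementation) of how a set of manually tuned x, y, z, roll, pitch, and yaw values maps to the homogeneous transform that ends up in the TF tree; the numbers are placeholders.

```python
# Minimal sketch, assuming the usual ROS fixed-axis roll/pitch/yaw convention.
import numpy as np
from scipy.spatial.transform import Rotation

def manual_calibration_to_matrix(x, y, z, roll, pitch, yaw):
    """Build the 4x4 parent->child transform from manually tuned parameters."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# Placeholder example: a lidar mounted 1.2 m forward, 1.8 m up, pitched 2 degrees.
print(manual_calibration_to_matrix(1.2, 0.0, 1.8, 0.0, np.deg2rad(2.0), 0.0))
```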
Map-based lidar-lidar calibration: Link
Lidars are key sensors in autonomous driving, and calibrating them is necessary to aggregate all of their information in a single frame. We do this via the standard ICP algorithm, leveraging an external point cloud map to compensate for the sparsity of the lidars during the registration process.
Calibration with this method is automatic, but it requires the vehicle to be localized (to make better use of the map) and the selection of a scene/environment appropriate for calibration (with enough natural features for the registration).
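The following is a minimal sketch of the underlying idea using Open3D as a stand-in for the repository's own implementation: register each lidar's scan against the dense map and chain the two map poses to obtain the lidar-lidar extrinsic. The file names, voxel size, and correspondence distance are placeholder assumptions.

```python
# Minimal sketch, not the repository's implementation.
import numpy as np
import open3d as o3d

def register_to_map(scan, map_cloud, init, voxel=0.2, max_dist=1.0):
    """Point-to-plane ICP of a (sparse) scan against a (dense) map."""
    scan_d = scan.voxel_down_sample(voxel)
    map_d = map_cloud.voxel_down_sample(voxel)
    map_d.estimate_normals()
    result = o3d.pipelines.registration.registration_icp(
        scan_d, map_d, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # T_map_scan

map_cloud = o3d.io.read_point_cloud("map.pcd")   # placeholder paths
scan_a = o3d.io.read_point_cloud("lidar_a.pcd")
scan_b = o3d.io.read_point_cloud("lidar_b.pcd")

# Initial guesses would come from the current (rough) TF tree / localization.
T_map_a = register_to_map(scan_a, map_cloud, np.eye(4))
T_map_b = register_to_map(scan_b, map_cloud, np.eye(4))
T_a_b = np.linalg.inv(T_map_a) @ T_map_b  # lidar_b expressed in lidar_a
```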
Interactive camera-lidar calibration: Link
Camera extrinsic calibration can be performed directly between the vehicle and the camera, but we believe that calibrating the camera with respect to a lidar provides a more consistent calibration.
This method requires no additional infrastructure and can be used with generic data where the vehicle is either stopped or moving slowly. To perform the calibration, the user selects corresponding points in both sensors (camera and lidar), after which the calibration is computed automatically by minimizing the reprojection error using PnP algorithms (a minimal sketch of this step is shown after this tool's description).
The interactive calibration tool provides a UI that not only visualizes the calibration but also gives control over the calibration process, including steps such as outlier removal.
The only requirement to use this method is to perform the sensors' intrinsic calibration before attempting to calibrate the extrinsics.
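As an illustration of the PnP step mentioned above, the following is a minimal sketch using OpenCV; the correspondences, camera matrix, and the assumption of an undistorted image are placeholders rather than output of the tool.

```python
# Minimal sketch of reprojection-error minimization via PnP; all values are placeholders.
import cv2
import numpy as np

object_points = np.array([[5.1, 1.2, 0.3],   # points picked in the lidar cloud (m)
                          [5.0, -0.8, 0.4],
                          [7.3, 0.1, 1.1],
                          [6.2, 2.0, -0.2],
                          [8.0, -1.5, 0.9],
                          [9.4, 0.7, 0.0]], dtype=np.float64)
image_points = np.array([[612.0, 388.5],     # matching pixels picked in the image
                         [842.3, 371.0],
                         [705.1, 290.7],
                         [520.4, 410.2],
                         [910.8, 250.3],
                         [698.0, 360.9]], dtype=np.float64)
K = np.array([[1000.0, 0.0, 640.0],          # intrinsics from a prior calibration
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                           # assume an undistorted/rectified image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)                   # lidar -> camera rotation; tvec is the translation

# Reprojection error as a quick sanity check on the calibration
proj, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
print("mean reprojection error [px]:",
      np.linalg.norm(proj.squeeze() - image_points, axis=1).mean())
```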
Automatic tag-based camera-lidar calibration: Link
The biggest limitation of the "Interactive camera-lidar calibration" is that point selection must still be performed manually. To address this, this tool detects known tags in both sensors, and their corners become the calibration points for reprojection-error minimization.
The calibration process requires the availability of special tags, and consists of moving said tag (or tags) to different positions within the shared field of view to collect data. Although in principle this method requires the lidar sensors to provide an intensity channel (to recognize the tag's payload), it is possible to circumvent this requirement via parameterization (we have only tested this feature with Pandar QT lidars).
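For reference, the following is a minimal sketch of the camera-side detection using OpenCV's ArUco module as a stand-in detector (the actual tool uses its own tag detectors); the tag dictionary and image path are assumptions.

```python
# Minimal sketch of camera-side tag detection; requires OpenCV >= 4.7 for ArucoDetector.
import cv2

image = cv2.imread("camera_frame.png")       # placeholder path
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11))
corners, ids, _ = detector.detectMarkers(image)

if ids is not None:
    for tag_id, tag_corners in zip(ids.flatten(), corners):
        # tag_corners has shape (1, 4, 2): the four corner pixels of the tag,
        # which become the image points for the same PnP step sketched above.
        print(f"tag {tag_id}:\n{tag_corners.reshape(4, 2)}")
```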
Camera-lidar based camera intrinsic calibration: Link
Camera intrinsic calibration is a well-established procedure carried out by moving chessboard-like planar objects in front of the camera.
Since we already use tags for camera-lidar calibration, we can avoid a separate calibration procedure and the need for an additional calibration board by reusing the calibration points from the "tag-based camera-lidar calibration" to calibrate the camera intrinsics in addition to its extrinsics, with results similar to those of the established methods.
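The following minimal sketch illustrates the idea on synthetic data: planar tag-corner correspondences (here generated by projecting a tag with a known camera) are fed to OpenCV's generic calibrateCamera. The actual tool works on the detections collected during the tag-based extrinsic calibration; the tag size, camera values, and image size below are placeholders.

```python
# Minimal sketch: camera intrinsics from (synthetic) tag-corner correspondences.
import cv2
import numpy as np

tag_size = 0.6  # m (placeholder)
# Tag corners expressed in the tag's own plane (z = 0); one "view" per tag placement.
tag_corners = np.array([[0, 0, 0], [tag_size, 0, 0],
                        [tag_size, tag_size, 0], [0, tag_size, 0]], np.float32)
K_true = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])

object_points, image_points = [], []
rng = np.random.default_rng(0)
for _ in range(15):                          # 15 simulated tag placements
    rvec = rng.uniform(-0.6, 0.6, 3)         # varied tag orientations
    tvec = np.array([rng.uniform(-1.5, 1.5), rng.uniform(-1.0, 1.0), rng.uniform(4.0, 8.0)])
    proj, _ = cv2.projectPoints(tag_corners, rvec, tvec, K_true, None)
    object_points.append(tag_corners)
    image_points.append(proj.astype(np.float32))

rms, K_est, dist, _, _ = cv2.calibrateCamera(
    object_points, image_points, (1280, 720), None, None)
print("RMS reprojection error [px]:", rms)
print("estimated camera matrix:\n", K_est)
```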
Ground-plane base-lidar calibration (z, roll, pitch): Link
Although the previously mentioned lidar-lidar calibration methods calibrate pairs of lidars, it is also necessary to calibrate at least one lidar with respect to the vehicle itself (usually the base link).
As a first step towards a fully automatic method for this calibration, we implemented a method that calibrates only the z, roll, and pitch values of the TF. It requires the surface near the car to form a plane; the method finds this plane and positions the vehicle frame on it. The position of the TF on the plane (x and y values) and its orientation (yaw value) must still be set manually.
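The following is a minimal sketch of the underlying geometry, assuming the ground points have already been segmented (the point selection and robust fitting used by the actual tool are omitted); sign conventions for roll and pitch depend on which frame is taken as the parent.

```python
# Minimal sketch: plane fit and z/roll/pitch extraction, NumPy only.
import numpy as np

def ground_plane_z_roll_pitch(ground_points):
    """ground_points: Nx3 sensor-frame points assumed to lie on the road plane."""
    centroid = ground_points.mean(axis=0)
    # The plane normal is the direction of least variance of the centered points.
    _, _, vt = np.linalg.svd(ground_points - centroid)
    normal = vt[-1]
    if normal[2] < 0:                    # make the normal point upwards
        normal = -normal
    z = abs(normal @ centroid)           # sensor height above the plane
    # Roll/pitch that tilt the sensor's z axis onto the plane normal
    # (ZYX yaw-pitch-roll convention; signs depend on the chosen parent frame).
    roll = np.arctan2(normal[1], normal[2])
    pitch = -np.arcsin(np.clip(normal[0], -1.0, 1.0))
    return z, roll, pitch

# Synthetic check: flat ground 1.8 m below a perfectly level sensor.
xy = np.random.default_rng(0).uniform(-10.0, 10.0, (500, 2))
pts = np.c_[xy, np.full(len(xy), -1.8)]
print(ground_plane_z_roll_pitch(pts))    # ~ (1.8, 0.0, 0.0)
```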
Having released our calibration suite, we would love to hear the community's opinions! In particular, we would appreciate it if you could test our tools, compare them with your pipeline, and share your thoughts and feedback with us. Similarly, if you have any comment, question, idea, or feature you would like to see, whether related to the previously stated components (sensor, control, and localization) or to any other autonomous-driving-related area, please let us know!
For our part, we have scheduled the development of a full base-lidar calibration method, of lidar-lidar and camera-lidar methods that require little or no infrastructure, and of tools to estimate the calibration status during normal autonomous driving.
We hope you like our set of tools!