Code for the RA-L 2022 paper "RINet: Efficient 3D Lidar-Based Place Recognition Using Rotation Invariant Neural Network".
```
@ARTICLE{9712221,
  author={Li, Lin and Kong, Xin and Zhao, Xiangrui and Huang, Tianxin and Li, Wanlong and Wen, Feng and Zhang, Hongbo and Liu, Yong},
  journal={IEEE Robotics and Automation Letters},
  title={{RINet: Efficient 3D Lidar-Based Place Recognition Using Rotation Invariant Neural Network}},
  year={2022},
  volume={7},
  number={2},
  pages={4321-4328},
  doi={10.1109/LRA.2022.3150499}}
```
```
conda create -n rinet python=3.7
conda activate rinet
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
conda install tqdm scikit-learn matplotlib tensorboard
```
You can use the descriptors we provide directly, or generate them yourself as described below.
Requirements: OpenCV, PCL, and yaml-cpp.
```
cd gen_desc && mkdir build && cd build && cmake .. && make -j4
```
If compilation succeeds, run the following command to generate the descriptors (all descriptors are saved to a single binary file, `output_file.bin`):
```
./kitti_gen cloud_folder label_folder output_file.bin
```
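The exact on-disk layout of `output_file.bin` depends on the generator implementation; check the `gen_desc` source before relying on any format. As a minimal sketch only, assuming the file is a flat array of `float32` values with a fixed descriptor dimension (the dimension `360` below is a hypothetical placeholder), the descriptors could be loaded with NumPy like this:

```python
import numpy as np

DESC_DIM = 360  # hypothetical dimension; verify against the generator source


def load_descriptors(path):
    """Load a flat float32 binary file and reshape to (num_scans, DESC_DIM)."""
    flat = np.fromfile(path, dtype=np.float32)
    assert flat.size % DESC_DIM == 0, "file size is not a multiple of DESC_DIM"
    return flat.reshape(-1, DESC_DIM)


# Round-trip demo with synthetic data:
demo = np.random.rand(10, DESC_DIM).astype(np.float32)
demo.tofile("demo_desc.bin")
loaded = load_descriptors("demo_desc.bin")
print(loaded.shape)  # (10, 360)
```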
```
data
|---desc_kitti
|   |---00.npy
|   |---01.npy
|   |---...
|---gt_kitti
|   |---00.npz
|   |---01.npz
|   |---...
|---pose_kitti
|   |---00.txt
|   |---02.txt
|   |---...
|---pairs_kitti
|   |---...
```
You can download the provided preprocessed data.
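Assuming the files under `pose_kitti` follow the standard KITTI odometry pose format (12 space-separated values per line, the row-major flattening of the 3x4 transform `[R | t]`), they can be parsed into pose matrices as follows; this is an illustrative sketch, not code from the repository:

```python
import numpy as np


def load_kitti_poses(path):
    """Parse a KITTI odometry pose file into an (N, 3, 4) array.

    Each line holds 12 values: the row-major 3x4 transform [R | t].
    """
    return np.loadtxt(path).reshape(-1, 3, 4)


# Demo with a synthetic one-pose file (identity rotation, zero translation):
with open("demo_pose.txt", "w") as f:
    f.write("1 0 0 0 0 1 0 0 0 0 1 0\n")
poses = load_kitti_poses("demo_pose.txt")
print(poses.shape)  # (1, 3, 4)
```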
```
python train.py --seq='00'
```
Pretrained models can be downloaded from this link.
```
python eval.py
```
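`eval.py` produces the metrics reported in the paper. As a rough illustration of the retrieval step common to descriptor-based place recognition (not the repository's actual code), matching fixed-length descriptors amounts to nearest-neighbor search by similarity:

```python
import numpy as np


def top1_matches(query_desc, db_desc):
    """Return the index of the most similar database descriptor for each
    query, using cosine similarity on L2-normalized descriptors."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    sim = q @ d.T  # (num_queries, num_db) similarity matrix
    return sim.argmax(axis=1)


# Demo: each query is a slightly perturbed copy of one database descriptor,
# so top-1 retrieval should recover the original indices.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 64))
queries = db[[3, 42, 7]] + 0.01 * rng.normal(size=(3, 64))
matches = top1_matches(queries, db)
print(matches)  # [ 3 42  7]
```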
We provide the raw data for the tables and curves in the paper, including the compared methods DiSCO and Locus. Raw data for other methods can be found in this repository.