Yue Chen¹, Xingyu Chen¹†⚑, Yicen Li²
†Corresponding Author, ⚑Project Lead, ¹Xi'an Jiaotong University, ²McMaster University
This repository is the official PyTorch implementation of PROCA.
- Python 3
- PyTorch and torchvision (https://pytorch.org/)
- tensorboardX
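A quick way to confirm these dependencies are importable, as a minimal sketch (no specific versions are pinned here):

```python
# Minimal sanity check that the dependencies listed above are importable.
import torch
import torchvision
from tensorboardX import SummaryWriter  # logging backend provided by tensorboardX

print(f"PyTorch {torch.__version__}, torchvision {torchvision.__version__}")
print("tensorboardX SummaryWriter available:", SummaryWriter is not None)
```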
- Clone this repo:
git clone https://github.com/rover-xingyu/PROCA.git
cd PROCA
- The dataset we use in this paper is from the CMU-Seasons dataset.
- To verify the generalization ability of PROCA, we sample images from the urban part as the training set and evaluate on the suburban and park parts.
- We label each image as occluded or occlusion-free depending on whether it contains dynamic objects. You can find our labels in
dataset/CMU_Seasons_Occlusions.json
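A minimal sketch for reading these labels, assuming the JSON maps image filenames to an occlusion flag (the exact schema is an assumption; inspect the file first):

```python
import json

# Hedged sketch: assumes CMU_Seasons_Occlusions.json maps image filenames
# to a truthy occlusion flag; adjust if the actual schema differs.
with open("dataset/CMU_Seasons_Occlusions.json") as f:
    labels = json.load(f)

occluded = [name for name, flag in labels.items() if flag]
occlusion_free = [name for name, flag in labels.items() if not flag]
print(f"{len(occluded)} occluded / {len(occlusion_free)} occlusion-free images")
```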
- The dataset is organized as follows:
├── CMU_urban
│   ├── trainA    // images with appearance A without occlusion
│   │   ├── img_00119_c1_1303398474779487us_rect.jpg
│   │   ├── ...
│   ├── trainAO   // images with appearance A with occlusion
│   │   ├── img_00130_c0_1303398475779409us_rect.jpg
│   │   ├── ...
│   ├── ...
│   ├── trainL    // images with appearance L without occlusion
│   │   ├── img_00660_c0_1311874734447600us_rect.jpg
│   │   ├── ...
│   ├── trainLO   // images with appearance L with occlusion
│   │   ├── img_00617_c1_1311874730447615us_rect.jpg
│   │   ├── ...
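A minimal sketch for enumerating these folders and counting images per domain (the `CMU_urban` root path is an assumption; point it at your copy of the data):

```python
from pathlib import Path

# Hedged sketch: walks the trainA ... trainLO folders from the tree above
# and counts the rectified images in each one.
root = Path("CMU_urban")  # assumed dataset root; adjust to your setup

domains = {
    folder.name: sorted(folder.glob("*_rect.jpg"))
    for folder in sorted(root.iterdir())
    if folder.is_dir() and folder.name.startswith("train")
}

for name, images in domains.items():
    print(f"{name}: {len(images)} images")
```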
If you find this project useful for your research, please cite it using the following BibTeX entry.
@inproceedings{chen2023place,
  title={Place Recognition under Occlusion and Changing Appearance via Disentangled Representations},
  author={Chen, Yue and Chen, Xingyu and Li, Yicen},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={1882--1888},
  year={2023},
  organization={IEEE}
}
Our code is based on the awesome PyTorch implementation of Diverse Image-to-Image Translation via Disentangled Representations (DRIT++ and MDMM). We appreciate all the contributors.