Official source code of the paper "Monocular 360 Depth Estimation via Spherical Fully-Connected CRFs" (arXiv | Project page).
Environments
- Python 3.9
- PyTorch 1.13.0, torchvision 0.14.0, CUDA 11.7
- Platform: NVIDIA RTX 3090
Install the requirements:
pip install -r requirements.txt
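After installing, a quick sanity check that the environment matches the versions listed above can save debugging time later. This is a minimal snippet, not part of the repository:

```python
# Sanity-check the installed PyTorch / torchvision versions and CUDA availability.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected 1.13.0
print("torchvision:", torchvision.__version__)  # expected 0.14.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA build:", torch.version.cuda)    # expected 11.7
    print("GPU:", torch.cuda.get_device_name(0))
```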
Please download the datasets you need, e.g., Matterport3D and Stanford2D3D. For Matterport3D, please preprocess it following UniFuse/Matterport3D/README.md.
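Before launching training, it can help to confirm that the preprocessed data sits where your config expects it. The sketch below is purely hypothetical: the dataset root and file-naming pattern are assumptions and must be adapted to your own setup and config:

```python
# Hypothetical check: count panoramas under an assumed dataset root.
# The real root path and file naming depend on your preprocessing and YAML config.
from pathlib import Path

dataset_root = Path("./data/Matterport3D")  # assumption: adjust to your config
if not dataset_root.exists():
    raise FileNotFoundError(f"Dataset root not found: {dataset_root}")

rgb_files = sorted(dataset_root.rglob("*rgb*.png"))  # assumption on the naming scheme
print(f"Found {len(rgb_files)} RGB panoramas under {dataset_root}")
```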
Training:
python train.py --config ./configs/train_matterport3d/b5_matterpot3d.yaml
python train.py --config ./configs/train_stanford2d3d/b5_stanford2d3d.yaml
Training on other datasets, such as Structured3D, follows the same pattern with the corresponding config file.
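Each run is driven by a YAML config under ./configs. A minimal sketch of how such a config can be inspected before training, assuming PyYAML is available (the printed fields are simply whatever the file contains; nothing below is specific to this repository):

```python
# Load and print a training config so dataset paths and hyperparameters can be checked.
import yaml

config_path = "./configs/train_matterport3d/b5_matterpot3d.yaml"
with open(config_path, "r") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```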
Pre-trained CRF360D models are available for both datasets: Matterport3D and Stanford2D3D.
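A minimal sketch of how a downloaded checkpoint can typically be restored in PyTorch. The checkpoint filename, key layout, and model constructor below are assumptions; the evaluation script in this repository handles this for you:

```python
# Hypothetical example of restoring a downloaded checkpoint; the real model class,
# checkpoint filename, and key layout are defined by this repository's code.
import torch

ckpt_path = "./checkpoints/crf360d_matterport3d.pth"  # assumption: wherever you saved the download
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Many PyTorch training scripts nest the weights under a key such as "state_dict";
# otherwise treat the whole file as the state dict.
state_dict = checkpoint.get("state_dict", checkpoint) if isinstance(checkpoint, dict) else checkpoint

# model = build_model(cfg)            # hypothetical: construct the network as train.py does
# model.load_state_dict(state_dict)
# model.eval()
print(f"Checkpoint contains {len(state_dict)} entries")
```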
Evaluation:
python evaluate.py --config ./configs/test_matterport3d/panocrf_b5.yaml
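For reference, monocular depth estimation on these benchmarks is commonly reported with the standard error and accuracy metrics (Abs Rel, Sq Rel, RMSE, RMSE log, and the δ thresholds). A minimal sketch of these metrics, assuming invalid-depth masking has already been applied; the repository's evaluate.py defines the authoritative versions:

```python
# Standard monocular depth metrics over valid pixels (gt and pred are positive depth arrays).
import numpy as np

def depth_metrics(gt: np.ndarray, pred: np.ndarray) -> dict:
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel":  np.mean(np.abs(gt - pred) / gt),
        "sq_rel":   np.mean(((gt - pred) ** 2) / gt),
        "rmse":     np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "delta_1":  np.mean(thresh < 1.25),
        "delta_2":  np.mean(thresh < 1.25 ** 2),
        "delta_3":  np.mean(thresh < 1.25 ** 3),
    }
```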
Please cite our paper if you find our work useful in your research.
@article{cao2024crf360d,
  title={CRF360D: Monocular 360 Depth Estimation via Spherical Fully-Connected CRFs},
  author={Cao, Zidong and Wang, Lin},
  journal={arXiv preprint arXiv:2405.11564},
  year={2024}
}