The PyTorch code for our paper:
PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation, 3DV 2021 (pdf)
Hualie Jiang, Laiyan Ding, Junjie Hu and Rui Huang
Install PyTorch first by running
conda install pytorch=1.5.1 torchvision=0.6.1 cudatoolkit=10.1 -c pytorch
Then install the other requirements
pip install -r requirements.txt
Please download the preprocessed NYU-Depth-V2 dataset (sampled every 5 frames) provided by Junjie Hu and extract it.
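As a sketch of this step (the archive name nyu_data.zip and the target directory are assumptions; adjust them to the actual download), extraction and setting the data path could look like:

```shell
# Hypothetical archive name and target directory; adjust to the actual download.
ARCHIVE=nyu_data.zip
mkdir -p nyu_data
# Extract only if the archive is present.
if [ -f "$ARCHIVE" ]; then
  unzip -q "$ARCHIVE" -d nyu_data
fi
# The later commands read the dataset location from $DATA_PATH.
export DATA_PATH="$(pwd)/nyu_data"
```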
Extract the superpixels and line segments by executing
python extract_superpixel.py --data_path $DATA_PATH
python extract_lineseg.py --data_path $DATA_PATH
Run depth_prediction_example.ipynb with Jupyter Notebook.
Train the 3-frame model:
python train.py --data_path $DATA_PATH --model_name plnet_3f --frame_ids 0 -2 2
For the 5-frame model, initializing from the pretrained 3-frame model gives better results:
python train.py --data_path $DATA_PATH --model_name plnet_5f --load_weights_folder models/plnet_3f --frame_ids 0 -4 -2 2 4
The pretrained models from our paper are available on Google Drive.
Evaluate depth estimation on NYU-Depth-V2:
python evaluate_nyu_depth.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH
Evaluate depth estimation on ScanNet:
python evaluate_scannet_depth.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH
Evaluate pose estimation on ScanNet:
python evaluate_scannet_pose.py --data_path $DATA_PATH --load_weights_folder $MODEL_PATH --frame_ids 0 1
Note: evaluating on ScanNet requires downloading the preprocessed data provided by P^2Net.
The project borrows code from Monodepth2 and P^2Net. Many thanks to their authors.
Please cite our paper if you find our work useful in your research.
@inproceedings{jiang2021plnet,
title={PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation},
author={Jiang, Hualie and Ding, Laiyan and Hu, Junjie and Huang, Rui},
booktitle={International Conference on 3D Vision (3DV)},
year={2021}
}