Code for reproducing the results in the following paper:
Learning Single-Image Depth from Videos using Quality Assessment Networks
Weifeng Chen, Shengyi Qian, Jia Deng
Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Please check the project site for more details.
The code is written in `python 2.7.13`, using `pytorch 0.2.0_4`. Please make sure that you install the correct pytorch version, as later versions may cause the code to break.
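If you are unsure what your environment provides, a quick check along these lines prints the relevant versions (a minimal sketch; it assumes only that `torch` is installed):

```python
# Print the interpreter and pytorch versions; this repo expects
# python 2.7.13 and pytorch 0.2.0_4 (later pytorch versions may
# break the code).
import sys
import torch

print("python:  %s" % sys.version.split()[0])  # expected: 2.7.13
print("pytorch: %s" % torch.__version__)       # expected: 0.2.0_4
```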
- Clone this repo:

  `git clone git@github.com:princeton-vl/YouTube3D.git`
- Download `data_model.tar.gz` into the `YouTube3D` directory, then untar it:

  `cd YouTube3D`
  `tar -xzvf data_model.tar.gz`
- Download and unpack the images from the Depth in the Wild (DIW) dataset. Edit `DIW_test.csv` under `YouTube3D/data` so that all the image paths are absolute paths (see the sketch after this list for one way to do this).
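The layout of `DIW_test.csv` is not spelled out here, so the following is only a minimal sketch of the path rewrite. It assumes the image paths are the comma-separated fields that end in an image extension, and `DIW_ROOT` is a hypothetical placeholder for wherever you unpacked the DIW images; adjust both to your setup and keep a backup of the original file:

```python
# Hypothetical helper (not part of the repo): prepend an absolute root
# to every relative image path found in DIW_test.csv, in place.
# Run from the YouTube3D directory.
import os

DIW_ROOT = "/absolute/path/to/DIW/images"  # assumption: your DIW image dir
CSV_PATH = "data/DIW_test.csv"

def absolutize(field):
    # Treat a field as an image path if it ends in a known extension.
    f = field.strip()
    if f.lower().endswith((".png", ".jpg", ".jpeg")) and not os.path.isabs(f):
        return os.path.join(DIW_ROOT, f)
    return field

with open(CSV_PATH) as fin:
    lines = [",".join(absolutize(x) for x in line.rstrip("\n").split(","))
             for line in fin]

with open(CSV_PATH, "w") as fout:
    fout.write("\n".join(lines) + "\n")
```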
To evaluate the pre-trained `EncDecResNet` model (trained on ImageNet + ReDWeb + DIW + YouTube3D) on the DIW dataset, run the following commands:

`cd YouTube3D/src`
`python test.py -t DIW_test.csv -model exp/YTmixReD_dadlr1e-4_DIW_ReDWebNet_1e-6_bs4/models/model_iter_753000.bin`
If you also want the qualitative outputs, append a `-vis` flag; the outputs will be written to the folder `visualize`:

`mkdir visualize`
`python test.py -t DIW_test.csv -model exp/YTmixReD_dadlr1e-4_DIW_ReDWebNet_1e-6_bs4/models/model_iter_753000.bin -vis`
To evaluate the pre-trained `HourglassNetwork` model (trained on NYU + DIW + YouTube3D) on the DIW dataset, run the following command:

`python test.py -t DIW_test.csv -model exp/Hourglass/models/best_model_iter_852000.bin`
Please send any questions or comments to Weifeng Chen at wfchen@umich.edu.