Source code for our CVPR 2020 paper “Learning Selective Self-Mutual Attention for RGB-D Saliency Detection” by Nian Liu, Ni Zhang, and Junwei Han.
Created by Ni Zhang, email: nnizhang.1995@gmail.com
- pytorch 0.4.1
- torchvision 0.1.8
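A minimal environment setup might look like the following (this is an assumption on our part; depending on your platform and CUDA version you may need wheels from the PyTorch website instead):
pip install torch==0.4.1 torchvision==0.1.8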
- download the RGB-D datasets [baidu pan fetch code: chdz | Google drive] and the pretrained VGG model [baidu pan fetch code: dyt4 | Google drive], then put them in the ./RGBdDataset_processed and ./pretrained_model directories, respectively (the expected layout is sketched below).
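A rough sketch of the expected layout (dataset subfolder names depend on what you download; DUT-RGBD is one example, and the weight file name is whatever the download provides):
./RGBdDataset_processed/   <- processed RGB-D datasets (e.g., DUT-RGBD, ...)
./pretrained_model/        <- pretrained VGG weights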
- run
python generate_list.py
to generate the image lists.
- modify the settings in parameter.py (a rough sketch of typical entries follows).
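As an illustration only, parameter.py usually collects paths and hyperparameters along these lines; the variable names and values below are hypothetical, so use the ones actually defined in the file:
```python
# Hypothetical sketch of the kind of settings kept in parameter.py;
# the real variable names and values may differ.
train_list = './RGBdDataset_processed/train_list.txt'  # list written by generate_list.py
vgg_path = './pretrained_model/vgg16.pth'               # pretrained VGG weights
save_dir = './models/'                                  # where checkpoints are saved
lr = 1e-4                                               # learning rate (illustrative)
batch_size = 4                                          # illustrative value
```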
- start training with
python train.py
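If you have multiple GPUs, you can pin one in the standard PyTorch way (assuming the code uses the default CUDA device):
CUDA_VISIBLE_DEVICES=0 python train.py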
- download our models [baidu pan fetch code: ly9k | Google drive] and put them in the ./models directory. After downloading, you will find two models (S2MA.pth and S2MA_DUT.pth): S2MA_DUT.pth is used for testing on the DUT-RGBD dataset, and S2MA.pth is used for testing on the remaining datasets.
- modify the settings in parameter.py accordingly
- start testing with
python test.py
and the saliency maps will be generated in the ./output directory.
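For a quick sanity check of the generated maps, a simple mean absolute error (MAE) computation against the ground-truth masks might look like this (the file paths are placeholders, and the prediction and mask are assumed to have the same resolution):
```python
# Minimal MAE sketch between a predicted saliency map and its ground-truth
# mask; both images are read as grayscale and scaled to [0, 1].
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    pred = np.asarray(Image.open(pred_path).convert('L'), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert('L'), dtype=np.float64) / 255.0
    return np.abs(pred - gt).mean()

# Placeholder paths; point these at a real prediction/mask pair.
print(mae('./output/example.png', './RGBdDataset_processed/example_GT.png'))
```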
Our saliency maps can be downloaded from [baidu pan fetch code: frzb | Google drive].
We use some open-source code from Non-local_pytorch and denseASPP. Thanks to the authors.
If you find our work helpful, please cite:
@inproceedings{liu2020S2MA,
title={Learning Selective Self-Mutual Attention for RGB-D Saliency Detection},
author={Liu, Nian and Zhang, Ni and Han, Junwei},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={13756--13765},
year={2020}
}