This is the official code of our paper "Salient Object Detection Combining a Self-Attention Module and a Feature Pyramid Network".
Download the following datasets and unzip them into the `data` folder.

- DUTS dataset. The .lst file for training is `data/DUTS/DUTS-TR/train_pair.lst`.
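As a rough, non-authoritative sketch of the expected preparation (the archive filename below is an assumption; use whatever file the dataset link provides):

```bash
# Hedged sketch: unzip the downloaded DUTS archive into data/ so that the
# training list ends up at data/DUTS/DUTS-TR/train_pair.lst.
mkdir -p data
unzip DUTS.zip -d data/                 # "DUTS.zip" is an assumed filename; use the file you downloaded
ls data/DUTS/DUTS-TR/train_pair.lst     # should exist after unzipping
```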
Download the following pre-trained models (GoogleDrive | BaiduYun, pwd: 27p5) into the `dataset/pretrained` folder.
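For reference, a hedged sketch of placing the downloaded backbone weights (the weight filename below is an assumption and may differ from the actual download):

```bash
# Hedged sketch: the pre-trained backbone weights go into dataset/pretrained/.
mkdir -p dataset/pretrained
mv ~/Downloads/resnet50.pth dataset/pretrained/   # assumed filename; use the file from the links above
```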
- Set the `--train_root` and `--train_list` paths in `train.sh` correctly (a sketch of `train.sh` is given after this list).
- We demo using ResNet-50 as the network backbone and train with an initial lr of 5e-5 for 24 epochs; the lr is divided by 10 after 15 epochs. Start training with `./train.sh`.
- After training, the resulting model will be stored under the `results/run-*` folder.
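The contents of `train.sh` are not shown in this README; the following is only a hypothetical sketch under stated assumptions. Only `--train_root` and `--train_list` are named above; `--mode='train'` is inferred from the test command below and may not match the actual script.

```bash
#!/bin/bash
# Hypothetical sketch of train.sh: only --train_root and --train_list are named
# in this README; --mode='train' is an inferred assumption.
python main.py --mode='train' \
    --train_root='./data/DUTS/DUTS-TR' \
    --train_list='./data/DUTS/DUTS-TR/train_pair.lst'
```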
For testing, run:

`python main.py --mode='test' --model='results/run-*/models/final.pth' --test_fold='results/run-*-sal-e' --sal_mode='e'`

All result saliency maps will be stored under the `results/run-*` folders in .png format.
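A small hedged helper (hypothetical `test_one.sh`, not part of this repo) that parameterizes the command above; only `--sal_mode='e'` is confirmed by this README, so any other mode code you pass is an assumption about this codebase:

```bash
#!/bin/bash
# Hedged helper: run the test command for a given run folder and --sal_mode code.
# Only --sal_mode='e' appears in this README; other codes are assumptions.
RUN_DIR=$1      # e.g. results/run-0 (hypothetical name; use your actual run folder)
SAL_MODE=$2     # e.g. e
python main.py --mode='test' \
    --model="${RUN_DIR}/models/final.pth" \
    --test_fold="${RUN_DIR}-sal-${SAL_MODE}" \
    --sal_mode="${SAL_MODE}"
```

Usage example: `bash test_one.sh results/run-0 e`.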
For evaluation, we use the code from https://github.com/NathanUA/Binary-Segmentation-Evaluation-Tool, which was used in the CVPR 2019 paper 'BASNet: Boundary-Aware Salient Object Detection' by Xuebin Qin, Zichen Zhang, Chenyang Huang, Chao Gao, Masood Dehghan and Martin Jagersand.
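For instance, the evaluation tool can be obtained with a plain clone; how to point it at the ground-truth masks and the predicted maps is described in that repository's own documentation, not here.

```bash
# Clone the evaluation tool; see that repository's README for how to configure
# the ground-truth and prediction (results/run-*) directories.
git clone https://github.com/NathanUA/Binary-Segmentation-Evaluation-Tool.git
```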
To cite this code in publications, please use:
@article{ren2020salient,
title={Salient Object Detection Combining a Self-attention Module and a Feature Pyramid Network},
author={Ren, Guangyu and Dai, Tianhong and Barmpoutis, Panagiotis and Stathaki, Tania},
journal={Electronics},
volume={9},
number={10},
pages={1702},
year={2020},
publisher={Multidisciplinary Digital Publishing Institute}
}