Revisiting Salient Object Detection: Simultaneous Detection, Ranking, and Subitizing of Multiple Salient Objects
This repository contains code for the paper "Revisiting Salient Object Detection: Simultaneous Detection, Ranking, and Subitizing of Multiple Salient Objects", presented at CVPR 2018.
If you find the code useful for your research, please consider citing our work:
@InProceedings{Islam_2018_CVPR,
author = {Amirul Islam, Md and Kalash, Mahmoud and Bruce, Neil D. B.},
title = {Revisiting Salient Object Detection: Simultaneous Detection, Ranking, and Subitizing of Multiple Salient Objects},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
The code is free to use for any purpose. Below are detailed instructions for setting it up and using it for different applications.
The code can be downloaded using the following commands:
git clone --recursive https://github.com/islamamirul/rsdnet.git
cd rsdnet
Setup:
- Download and compile caffe-rsdnet, which is a modified version of deeplab-public-ver2.
- Download the PASCAL-S dataset from here and put it under ./data/
- Run the following script to generate the stack of saliency masks:
./scripts/stack_generation/generate_saliency_mask_stack.m
- Download the pretrained weights (init.caffemodel) of DeepLabv2 from here and put them under ./models/trained_weights/
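For reference, the MATLAB script above builds the stacked ground-truth representation used for training. A minimal Python sketch of the same idea, under the assumption that the ground truth encodes per-pixel observer-agreement counts and that each slice of the stack is the binary mask of pixels exceeding a successive agreement threshold (the function name and exact encoding here are illustrative, not the official implementation):

```python
import numpy as np

def saliency_mask_stack(gt, num_levels):
    """Build a stack of nested binary masks from a ground-truth map.

    gt: 2-D array of per-pixel agreement counts (0..num_levels).
    Returns an array of shape (num_levels, H, W) where slice k is the
    binary mask of pixels with agreement > k, so slices are nested:
    slice 0 marks any agreement, the last slice only the strongest.
    """
    return np.stack([(gt > k).astype(np.uint8) for k in range(num_levels)])

# Toy example: a 2x2 map with agreement counts 0..3
gt = np.array([[0, 1],
               [2, 3]])
stack = saliency_mask_stack(gt, 3)
# stack.shape == (3, 2, 2); stack[0] marks all nonzero pixels
```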
Training:
- You can download the trained model reported in our paper from Dropbox and put it under ./models/trained_weights/
- Modify the caffe root directory and run the following command to start training:
sh train_rsdnet.sh
Testing:
- Modify the caffe root directory in ./scripts/inference/test_rsdnet.py and run the following command:
sh test_rsdnet.sh
The results of multiple salient object detection (extended to salient object ranking) on the PASCAL-S dataset can be found at Dropbox.
Evaluation:
- Salient Object Ranking (SOR): run the following script to generate the overall SOR score for RSDNet:
./scripts/eval/SOR/SOR.m
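For context, SOR is the Spearman rank-order correlation between the ground-truth and predicted rank order of the salient objects in an image, normalized to [0, 1]. A minimal Python sketch of that computation (the MATLAB script above is the reference implementation; the function name here is illustrative):

```python
from scipy.stats import spearmanr

def sor_score(gt_ranks, pred_ranks):
    """Normalized Spearman rank-order correlation in [0, 1].

    gt_ranks / pred_ranks: rank assigned to each salient object in the
    ground truth and the prediction, respectively (one entry per object).
    """
    rho, _ = spearmanr(gt_ranks, pred_ranks)
    return (rho + 1.0) / 2.0

# Perfect agreement on the ranking of 4 objects gives an SOR of 1.0;
# a fully reversed ranking gives 0.0.
print(sor_score([1, 2, 3, 4], [1, 2, 3, 4]))  # -> 1.0
```

The overall score reported for a method is typically the average of this per-image value over all test images on which it is defined.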
- To generate the detection scores (F-measure, AUC, and MAE), run the corresponding scripts under ./scripts/eval/
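The provided MATLAB scripts compute these metrics; for illustration, a minimal Python sketch of two of them under standard salient-object-detection conventions (binary ground truth, predictions in [0, 1], and the commonly used beta^2 = 0.3 for the F-measure; the fixed threshold here is an assumption, since F-measure is often also reported over a sweep of thresholds):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and ground truth, both in [0, 1]."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, thresh=0.5, beta2=0.3):
    """Weighted F-measure at a fixed binarization threshold."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt > 0).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Toy example: prediction thresholded at 0.5 matches the ground truth exactly
pred = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
gt = np.array([[1, 0],
               [1, 0]])
# mae(pred, gt) averages the per-pixel errors 0.1, 0.1, 0.2, 0.2
```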