
SalFBNet

This repository includes the PyTorch implementation of the following paper:

SalFBNet: Learning Pseudo-Saliency Distribution via Feedback Convolutional Networks, IMAVIS 2022.
Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, Masahiro Murakawa, Ryosuke Nakamura
(Paper) (arXiv)


Getting Started

1. Installation

You can install the environment manually with the following commands:

git clone https://github.com/gqding/SalFBNet.git
conda create -n salfbnet python=3.8
conda activate salfbnet
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install scikit-learn scipy tensorboard tqdm
pip install torchSummaryX

Alternatively, you can install the environment from the yml file. Before running the command, please revise the 'prefix' entry in environment.yml to match the path on your machine.

conda env create -f environment.yml
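
After the environment is created, a quick sanity check confirms that PyTorch and CUDA are visible (a minimal sketch; the exact versions depend on what conda resolved):

# check_env.py -- sanity check for the salfbnet environment
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))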

2. Downloads

  • Our released SalFBNet models

    We released our pretrained SalFBNet models on Google Drive. The shared models were initially trained on our Pseudo-Saliency dataset, then fine-tuned with SALICON and MIT1003 for testing on the MIT300 benchmark.

  • Our SalFBNet Pseudo Saliency Dataset

    We released our PseudoSaliency dataset on this ABCI Datasets page.

    We also show how to use our dataset for model training on this Usage page; a minimal loader sketch follows this list.

  • Our testing saliency results on public datasets

    You can download our testing saliency results from this Google Drive.
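
For a sense of how the dataset can be consumed for training, here is a minimal PyTorch Dataset sketch. The images/ and maps/ subfolder names are assumptions made for illustration; the actual directory layout is documented on the Usage page above.

# pseudo_saliency_dataset.py -- minimal loader sketch for PseudoSaliency.
# The "images" and "maps" subfolder names are ASSUMPTIONS; check the
# Usage page for the dataset's actual directory layout.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PseudoSaliencyDataset(Dataset):
    def __init__(self, root, size=(256, 256)):
        self.img_dir = os.path.join(root, "images")  # assumed name
        self.map_dir = os.path.join(root, "maps")    # assumed name
        self.names = sorted(os.listdir(self.img_dir))
        self.to_tensor = transforms.Compose(
            [transforms.Resize(size), transforms.ToTensor()])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        # assumes image and saliency map share the same filename
        name = self.names[idx]
        image = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        smap = Image.open(os.path.join(self.map_dir, name)).convert("L")
        return self.to_tensor(image), self.to_tensor(smap)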

3. Run

After downloading the pretrained models, you can run the test script with:

sh run_test.sh

Alternatively, you can modify the script to test different image folders and models (SalFBNet_Res18 or SalFBNet_Res18Fixed).

python main_test.py --model=pretrained_models/FBNet_Res18Fixed_best_model.pth \
--save_fold=./results_Res18Fixed/ \
--backbone=Res18Fixed \
--test_path=Datasets/PseudoSaliency/Images/ECSSD/images/

You can find the results under the 'results_*' folders.
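
To test both released backbones in one pass, a small driver can loop over the same flags shown above (a sketch; the Res18 checkpoint filename is a hypothetical guess patterned on the Res18Fixed name, so adjust it to the file you actually downloaded):

# run_both.py -- sketch: test both backbones via main_test.py.
import subprocess

configs = [
    ("Res18Fixed", "pretrained_models/FBNet_Res18Fixed_best_model.pth"),
    ("Res18", "pretrained_models/FBNet_Res18_best_model.pth"),  # HYPOTHETICAL filename
]

for backbone, ckpt in configs:
    subprocess.run([
        "python", "main_test.py",
        f"--model={ckpt}",
        f"--save_fold=./results_{backbone}/",
        f"--backbone={backbone}",
        "--test_path=Datasets/PseudoSaliency/Images/ECSSD/images/",
    ], check=True)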

4. Datasets

Dataset | #Image | #Training | #Val. | #Testing | Size | URL | Paper
--- | --- | --- | --- | --- | --- | --- | ---
SALICON | 20,000 | 10,000 | 5,000 | 5,000 | ~4GB | download link | paper
MIT300 | 300 | - | - | 300 | ~44.4MB | download link | paper
MIT1003 | 1,003 | 900* | 103* | - | ~178.7MB | download link | paper
PASCAL-S | 850 | - | - | 850 | ~108.3MB | download link | paper
DUT-OMRON | 5,168 | - | - | 5,168 | ~151.8MB | download link | paper
TORONTO | 120 | - | - | 120 | ~12.3MB | download link | paper
Pseudo-Saliency (Ours) | 176,880 | 150,000 | 26,880 | - | ~24.2GB | download link | paper

Performance Evaluation

1. Visualization Results

[Figure: visualization results]

2. Testing Performance on DUT-OMRON, PASCAL-S, and TORONTO

[Figure: testing performance on DUT-OMRON, PASCAL-S, and TORONTO]

3. Testing Performance on SALICON

[Figure: testing performance on SALICON]

4. Testing Performance on MIT300

[Figure: testing performance on MIT300]

Please check the MIT300 leaderboard for more details.

Our SalFBNet model ranked second best on the sAUC, CC, and SIM metrics (screenshot from December 10, 2021).

[Screenshot: MIT300 leaderboard]

5. Efficiency Comparison

[Figure: efficiency comparison]

Evaluation

We use the metric implementations from the MIT Saliency Benchmark for performance evaluation.
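
For reference, two of the distribution-based metrics reduce to a few lines of NumPy (a sketch of the standard definitions, not the benchmark's exact code; use the official implementation for reported numbers). CC is the Pearson correlation between the two maps after zero-mean, unit-variance normalization; SIM sums the elementwise minima after each map is normalized to sum to 1.

# metrics_sketch.py -- standard CC and SIM definitions.
import numpy as np

def cc(sal_map, gt_map):
    """Pearson correlation coefficient between two saliency maps."""
    s = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-8)
    g = (gt_map - gt_map.mean()) / (gt_map.std() + 1e-8)
    return float((s * g).mean())

def sim(sal_map, gt_map):
    """Similarity: sum of elementwise minima of the two distributions."""
    s = sal_map / (sal_map.sum() + 1e-8)
    g = gt_map / (gt_map.sum() + 1e-8)
    return float(np.minimum(s, g).sum())

if __name__ == "__main__":
    pred, gt = np.random.rand(64, 64), np.random.rand(64, 64)
    print("CC :", cc(pred, gt))
    print("SIM:", sim(pred, gt))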

Citation

Please cite the following papers if you use our data or codes in your research.

@article{ding2022salfbnet,
  title={SalFBNet: Learning pseudo-saliency distribution via feedback convolutional networks},
  author={Ding, Guanqun and {\.I}mamo{\u{g}}lu, Nevrez and Caglayan, Ali and Murakawa, Masahiro and Nakamura, Ryosuke},
  journal={Image and Vision Computing},
  pages={104395},
  year={2022},
  publisher={Elsevier}
}

@inproceedings{ding2021fbnet,
  title={FBNet: FeedBack-Recursive CNN for Saliency Detection},
  author={Ding, Guanqun and {\.I}mamo{\u{g}}lu, Nevrez and Caglayan, Ali and Murakawa, Masahiro and Nakamura, Ryosuke},
  booktitle={2021 17th International Conference on Machine Vision and Applications (MVA)},
  pages={1--5},
  year={2021},
  organization={IEEE}
}

Acknowledgement
