This repository provides the code for the paper:
Richard Droste, Jianbo Jiao and J. Alison Noble. Unified Image and Video Saliency Modeling. In: ECCV (2020).
If you use UNISAL, please cite the following BibTeX entry:
@inproceedings{drostejiao2020,
author = {{Droste}, Richard and {Jiao}, Jianbo and {Noble}, J. Alison},
title = "{Unified Image and Video Saliency Modeling}",
booktitle = {Proceedings of the 16th European Conference on Computer Vision (ECCV)},
year = {2020},
}
- https://arxiv.org/abs/2003.05477
- Official Benchmark
- Supplementary Video
- ECCV Full Spotlight Presentation (10min)
- ECCV Short Presentation (90s)
Figure: Comparison of UNISAL with current state-of-the-art methods on the DHF1K Benchmark.
Figure: UNISAL method overview.
To install the dependencies into a new conda environment, simply run:
conda env create -f environment.yml
source activate unisal
Alternatively, you can install them manually:
conda create --name unisal
source activate unisal
conda install pytorch=1.0 torchvision cudatoolkit=9.2 -c pytorch
conda install opencv=3.4 -c conda-forge
conda install scipy
pip install fire==0.2 tensorboardX==1.6
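As a quick sanity check of the manual installation, you can verify that PyTorch and OpenCV import correctly (the exact versions printed may differ, and CUDA availability depends on your machine):
python -c "import torch, torchvision, cv2; print(torch.__version__, torchvision.__version__, cv2.__version__); print('CUDA available:', torch.cuda.is_available())"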
We provide demo code that generates saliency predictions for example files from the DHF1K, Hollywood-2, UCF-Sports, SALICON and MIT1003 datasets.
The predictions are generated with the pretrained weights in training_runs/pretrained_unisal.
Follow these steps:
- Download the following example files and extract the contents into the examples folder of the repository:
  - Google Drive: .zip file or .tar.gz file
  - Baidu Pan: .zip file (password: mp3y) or .tar.gz file (password: ixdd)
- Generate the demo predictions for the examples by running
python run.py predict_examples
The predictions are written to saliency sub-directories inside the example folders.
My own variant of this command,
python run.py predict_examples_wk
instead saves the results to <my_bigdata_folder>/WF/Preds/<Dataset>/UNISAL/ to ease evaluation.
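To spot-check the demo output after running the command above, the generated saliency folders can be located and one prediction inspected, assuming the maps are written as standard image files (this is only a quick illustrative check, not part of the official pipeline):
# Locate the generated prediction folders
find examples -type d -name saliency
# Inspect the first prediction found (shape, dtype and value range)
python -c "import cv2, glob; f = sorted(glob.glob('examples/**/saliency/*', recursive=True))[0]; m = cv2.imread(f, cv2.IMREAD_GRAYSCALE); print(f, m.shape, m.dtype, m.min(), m.max())"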
The code for training and scoring the model, as well as for generating test set predictions, is included.
For training and test set predictions, the relevant datasets need to be downloaded.
- DHF1K, Hollywood-2 and UCF Sports: https://github.com/wenguanwang/DHF1K
- SALICON: http://salicon.net/challenge-2017/
- MIT1003: http://people.csail.mit.edu/tjudd/WherePeopleLook/index.html
Specify the paths of the downloaded datasets with the environment variables DHF1K_DATA_DIR, SALICON_DATA_DIR, HOLLYWOOD_DATA_DIR, UCFSPORTS_DATA_DIR, MIT300_DATA_DIR and MIT1003_DATA_DIR.
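For example, the variables can be set in the shell before training or prediction (the paths below are placeholders for wherever you stored the datasets):
export DHF1K_DATA_DIR=/path/to/DHF1K
export SALICON_DATA_DIR=/path/to/SALICON
export HOLLYWOOD_DATA_DIR=/path/to/Hollywood-2
export UCFSPORTS_DATA_DIR=/path/to/UCF-Sports
export MIT300_DATA_DIR=/path/to/MIT300
export MIT1003_DATA_DIR=/path/to/MIT1003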
To train the model, simply run:
python run.py train
By default, this function computes the scores of the DHF1K and SALICON validation sets
and the Hollywood-2 and UCF Sports test sets after the training is finished.
The training data and scores are saved in the training_runs folder.
Alternatively, the training path can be overwritten with the environment variable TRAIN_DIR.
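For example, to store the training runs somewhere other than training_runs (the path below is a placeholder):
export TRAIN_DIR=/path/to/my/training_runs
python run.py train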
Any trained model can be scored with:
python run.py score_model --train_id <name of training folder>
If --train_id is omitted, the provided pre-trained model is scored.
The scores are saved in the corresponding training folder.
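For example, to score the provided pre-trained model without any further arguments:
# Scores the weights in training_runs/pretrained_unisal
python run.py score_model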
To generate predictions for the test set of each dataset, follow these steps:
- Specify the directory where the predictions should be saved with the environment variable PRED_DIR.
- Generate the predictions by running
python run.py generate_predictions --train_id <name of training folder>
If --train_id is omitted, predictions of the provided pretrained model are generated.
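Putting both steps together for the provided pretrained model (the output path is a placeholder):
export PRED_DIR=/path/to/predictions
# Omitting --train_id uses the provided pretrained weights
python run.py generate_predictions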
To measure the model's running time, run:
python run.py running_time