This repository contains the code used to generate the results in the paper [Optimal visual search based on a model of target detectability in natural images], which was presented at NeurIPS 2020.
To install requirements:
pip3 install -r requirements.txt
The model presented in the paper outputs detectability as a function of retinal eccentricity for any given image.
To see how to calculate the detectability of one object on a set of backgrounds, run:
python get_ddash.py -h
This will use the object file given as the object_file parameter, taken from the data/overlays folder, and the backgrounds in the data/test folder.
Sample background patches can be found in datasets/test (images taken from the ETHZ Synthesizability texture dataset). The model outputs the detectability-eccentricity graph of the input image and a .csv file with the image name and the detectability fall-off rate.
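As a rough sketch of how the fall-off rate in that .csv could be consumed, the snippet below parses a small example file and evaluates d' at a given eccentricity. Note that the column names (`image_name`, `falloff_rate`), the foveal d' value, and the exponential fall-off form are illustrative assumptions, not the script's documented output schema.

```python
import csv
import io
import math

def ddash_at_eccentricity(d0, falloff, ecc):
    """Evaluate d' at eccentricity `ecc`, assuming an exponential
    fall-off d'(e) = d'0 * exp(-k * e). The exact functional form
    used by the paper's model may differ."""
    return d0 * math.exp(-falloff * ecc)

# Hypothetical output file: column names here are assumptions.
sample_output = io.StringIO(
    "image_name,falloff_rate\n"
    "texture_01.png,0.21\n"
    "texture_02.png,0.35\n"
)

for row in csv.DictReader(sample_output):
    rate = float(row["falloff_rate"])
    # d' at 10 degrees eccentricity, assuming a foveal d'0 of 3.0
    d10 = ddash_at_eccentricity(3.0, rate, 10.0)
    print(f"{row['image_name']}: fall-off {rate}, d'(10 deg) ~ {d10:.3f}")
```

A larger fall-off rate means detectability drops faster away from the fovea, which is what drives the fixation strategy in the search model.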
To see how to output the number of fixations and the scanpath for any given textured image with the target pasted at an unknown location, run:
python visual_search.py -h
The sample input .csv files provided as the default parameters in the files folder are taken from datasets/test/simul_ddash_params.csv. The model outputs two .csv files, containing the number of fixations and the scanpath.
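The snippet below sketches how such a scanpath .csv might be read back for analysis, e.g. to count fixations and sum saccade lengths. The column names (`fixation`, `x`, `y`) and pixel coordinates are illustrative assumptions, not the script's documented output schema.

```python
import csv
import io

# Hypothetical scanpath file: column names are assumptions.
scanpath_csv = io.StringIO(
    "fixation,x,y\n"
    "1,128,96\n"
    "2,201,150\n"
    "3,180,143\n"
)

# Fixation locations in order of occurrence
path = [(int(r["x"]), int(r["y"])) for r in csv.DictReader(scanpath_csv)]
num_fixations = len(path)

# Total saccade length in pixels (Euclidean distance between
# consecutive fixations)
total_dist = sum(
    ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    for (x1, y1), (x2, y2) in zip(path, path[1:])
)
print(num_fixations, round(total_dist, 1))
```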
Our model achieves the following performance on the 18 sample backgrounds in Figure 3 of the main paper:
| Model name | MSE | SE |
|---|---|---|
| AlexNet + Log. Reg. | 0.0978 | 0.0015 |