This repository contains the source code of the following paper: VLFATRollout: Fully Transformer-based Classifier for Retinal OCT Volumes, Marzieh Oghbaie, Teresa Araujo, Ursula Schmidt-Erfurth, Hrvoje Bogunovic
The proposed network employs Transformers for volume classification and is able to handle variable volume resolutions at both development and inference time.
The main models are available at `model_zoo/feature_extrc/models.py`.
Please check INSTALL.md for installation instructions.
For the OLIVES dataset, the list of samples should be provided in a `.csv` file, and its path should be set in the `annotation_path_test` field under `dataset` in the config file. The file should include at least the following columns: `sample_path`, `FileSetId`, `label`, `label_int`, and `n_frames`.
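A minimal sketch of such a file is shown below; the paths, IDs, labels, and frame counts are purely illustrative:

```csv
sample_path,FileSetId,label,label_int,n_frames
/data/OLIVES/patient_001/visit_01,FS0001,DME,0,49
/data/OLIVES/patient_002/visit_01,FS0002,DR,1,49
```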
For the Duke dataset, however, it is sufficient to give the dataloader the directory of the samples, arranged as `subset/class`.
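For example, a layout like the following would work (the subset and class names are illustrative):

```
Duke/
├── train/
│   ├── AMD/
│   └── NORMAL/
└── test/
    ├── AMD/
    └── NORMAL/
```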
To train the model, run:

```bash
python main/Smain.py --config_path config/YML_files/VLFATRollout.yaml
```
- Baseline experiments:
  - To run the baseline experiments, please refer to the following repo: https://github.com/marziehoghbaie/VLFAT
- Simple test with confusion matrix: set `train: false` and `allow_size_mismatch: false` under `train_config` in the corresponding config file (see the sketch below).
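A minimal sketch of the relevant part of the config, assuming the surrounding fields are left unchanged:

```yaml
train_config:
  train: false               # run evaluation only, no training
  allow_size_mismatch: false # do not tolerate size mismatches when loading the checkpoint (assumed semantics)
```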
Then run:

```bash
python main/Smain.py --config_path config/YML_files/FATRollOut.yaml
```
This repository is built using the timm library, PyTorch, and Meta Research repositories.
This project is released under the MIT license. Please see the LICENSE file for more information.