diff --git a/paper/fig2.pdf b/paper/fig2.pdf
index 72ed865..f40eaeb 100644
Binary files a/paper/fig2.pdf and b/paper/fig2.pdf differ
diff --git a/paper/fig2.svg b/paper/fig2.svg
deleted file mode 100644
index aa9b84a..0000000
--- a/paper/fig2.svg
+++ /dev/null
@@ -1,196 +0,0 @@
[196 deleted lines of SVG markup omitted; recoverable figure text: panel labels a., b., c., d. and the labels "Gradeable", "Predicted Quality", "Decision Threshold"]
diff --git a/paper/paper.md b/paper/paper.md
index fb4cf80..405dd52 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -45,10 +45,10 @@ The Fundus Image Toolbox has been developed to address this need within the medi
 # Tools
 The main functionalities of the Fundus Image Toolbox are:
-- Quality prediction (\autoref{fig:example}a.). We trained an ensemble of ResNets and EfficientNets on the combined DeepDRiD and DrimDB datasets [@deepdrid;@drimdb] to predict the gradeability of fundus images. Both datasets are publicly available. The model ensemble achieved an accuracy of 0.78 and an area under the receiver operating characteristic curve of 0.84 on a DeepDRiD test split and 1.0 and 1.0 on a DrimDB test split.
-- Fovea and optic disc localization (\autoref{fig:example}b.). Prediction of fovea and optic disc center coordinates using a multi-task EfficientNet model. We trained the model on the combined ADAM, REFUGE and IDRID datasets [@adam;@refuge;@idrid], which are publicly available. On our test split, the model achieved a mean distance to the fovea and optic disc targets of 0.88 % of the image size. This corresponds to a mean distance of 3,08 pixels in the 350 x 350 pixel images used for training and testing.
-- Vessel segmentation (\autoref{fig:example}c.). Segmentation of blood vessels in a fundus image using an ensemble of FR-U-Nets. The ensemble achieved an average Dice score of 0.887 on the test split of the FIVES dataset [@koehler2024].
-- Registration (\autoref{fig:example}d.). Alignment of a fundus photograph to another fundus photograph of the same eye using SuperRetina: A keypoint-based deep learning model that produced registrations of at least acceptable quality in 98.5 % of the cases on the test split of the FIRE dataset [@liu2022].
+- Quality prediction (\autoref{fig:example}a). We trained an ensemble of ResNets and EfficientNets on the combined DeepDRiD and DrimDB datasets [@deepdrid;@drimdb] to predict the gradeability of fundus images. Both datasets are publicly available. The model ensemble achieved an accuracy of 0.78 and an area under the receiver operating characteristic curve of 0.84 on a DeepDRiD test split and 1.0 and 1.0 on a DrimDB test split.
+- Fovea and optic disc localization (\autoref{fig:example}b). Prediction of fovea and optic disc center coordinates using a multi-task EfficientNet model. We trained the model on the combined ADAM, REFUGE and IDRID datasets [@adam;@refuge;@idrid], which are publicly available. On our test split, the model achieved a mean distance to the fovea and optic disc targets of 0.88 % of the image size. This corresponds to a mean distance of 3.08 pixels in the 350 x 350 pixel images used for training and testing.
+- Vessel segmentation (\autoref{fig:example}c). Segmentation of blood vessels in a fundus image using an ensemble of FR-U-Nets. The ensemble achieved an average Dice score of 0.887 on the test split of the FIVES dataset [@koehler2024].
+- Registration (\autoref{fig:example}d). Alignment of a fundus photograph to another fundus photograph of the same eye using SuperRetina: a keypoint-based deep learning model that produced registrations of at least acceptable quality in 98.5 % of the cases on the test split of the FIRE dataset [@liu2022].
 - Circle crop. Quickly center fundus images and crop to a circle [@fu2019].
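For reviewers reading the "+" bullets above, a minimal sketch of the kind of ensemble gradeability decision the quality-prediction bullet describes, plus the pixel conversion behind the quoted 0.88 % localization error. This is an illustration under stated assumptions, not the Fundus Image Toolbox API: the function names, the untrained torchvision stand-in backbones, and the 0.5 decision threshold are all hypothetical.

```python
# Illustrative sketch only: torchvision backbones stand in for the trained
# ensemble checkpoints; the helper names and the 0.5 threshold are assumptions,
# not the Fundus Image Toolbox's actual API.
import torch
from torchvision import models, transforms
from PIL import Image


def load_quality_ensemble():
    # Untrained stand-ins for the ResNet/EfficientNet quality models that the
    # paper trains on the combined DeepDRiD + DrimDB datasets.
    nets = [models.resnet18(num_classes=2), models.efficientnet_b0(num_classes=2)]
    for net in nets:
        net.eval()
    return nets


def predict_gradeability(image_path, ensemble, threshold=0.5):
    """Average the ensemble's 'gradeable' probabilities, then threshold them."""
    preprocess = transforms.Compose([
        transforms.Resize((350, 350)),  # resolution quoted in the paper
        transforms.ToTensor(),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.stack([net(x).softmax(dim=1)[0, 1] for net in ensemble])
    score = probs.mean().item()       # ensemble-averaged predicted quality
    return score, score >= threshold  # gradeable if above the decision threshold


# Worked conversion of the localization error quoted above:
# 0.88 % of a 350 x 350 image corresponds to 0.0088 * 350 = 3.08 pixels.
print(f"{0.0088 * 350:.2f} pixels")
```

Averaging per-model probabilities before applying a single decision threshold is one common way to combine such an ensemble; the toolbox's released checkpoints and exact decision rule may differ.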