GravityNet official repository
by Eng. Ciro Russo, PhD
👨‍💻 LinkedIn
📑 Google Scholar
📚 Research Gate
💻 Kaggle
🏢 University of Cassino and Lazio Meridionale
👨‍🏫 Prof. Claudio Marrocco (Google Scholar)
👨‍🏫 Prof. Alessandro Bria (LinkedIn)
👨‍💻 Giulio Russo (LinkedIn)
👨‍💻 Yusuf B. Tanrıverdi (LinkedIn)
GravityNet is a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and to the novel anchors as gravity points, since they appear to be “attracted” by the lesions.
Paper: GravityNet for end-to-end small lesion detection
@article{Russo_Bria_Marrocco_2024,
  title = {GravityNet for end-to-end small lesion detection},
  ISSN = {0933-3657},
  DOI = {10.1016/j.artmed.2024.102842},
  journal = {Artificial Intelligence in Medicine},
  author = {Russo, Ciro and Bria, Alessandro and Marrocco, Claudio},
  year = {2024},
  month = mar,
  pages = {102842}
}
ArXiv: https://arxiv.org/abs/2309.12876
This project is licensed; please review the LICENSE file for more information.
GravityNet is available on PyPI
pip install gravitynet
It can be imported as:
import gravitynet
This framework uses parameter parsing, so each new parameter must be added with attention to the reference section (for details, see parameters).
The definition of these parameters is essential for the experiment_ID used to save the results and avoid overwriting.
NOTE: Windows users should use ' _ ' as the separator, while on Linux the default separator is ' | '.
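The note above can be sketched as follows; `build_experiment_id` and the exact parameter-joining scheme are hypothetical and only illustrate the platform-dependent separator:

```python
import sys

def build_experiment_id(params: dict) -> str:
    """Join parameter name/value pairs into one experiment identifier.

    Hypothetical sketch: '|' is the default separator on Linux, while
    Windows uses '_' because '|' is not allowed in folder names.
    """
    separator = "_" if sys.platform.startswith("win") else "|"
    return separator.join(f"{name}={value}" for name, value in params.items())

print(build_experiment_id({"backbone": "ResNet-152", "lr": 1e-4, "bs": 8}))
```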
For details about the requirements, see requirements.txt:
pip install -r requirements.txt
Before starting the experiment, it is necessary to define the working paths:
--dataset_path -> path where the dataset is located
--experiments_path -> path where to save the result of the experiment
NOTE: the dataset_path is concatenated to the dataset name (see parameters)
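The path concatenation noted above can be illustrated as follows; the folder names are hypothetical placeholders:

```python
from pathlib import Path

# Hypothetical values illustrating how --dataset_path is concatenated
# with the --dataset name to locate the data.
dataset_path = Path("path/to/datasets")         # --dataset_path
experiments_path = Path("path/to/experiments")  # --experiments_path
dataset = "INbreast"                            # --dataset

dataset_folder = dataset_path / dataset
print(dataset_folder)
```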
For details about the dataset-structure
For details about the experiments-structure
The Dataset class is defined according to the structure of the dataset (see dataset-structure).
NOTE: run Dataset-Statistics.py (script-dataset) to save the dataset statistics.
SCRIPT-DATASET | DESCRIPTION |
---|---|
Dataset-Statistics.py | Save dataset statistics |
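The kind of per-dataset summary saved by Dataset-Statistics.py can be sketched as below; the statistics computed here are illustrative examples, not the script's actual output:

```python
import numpy as np

def dataset_statistics(annotations: dict) -> dict:
    """Compute simple dataset statistics from per-image annotations.

    `annotations` maps an image name to the list of its lesion centres;
    the chosen statistics are hypothetical examples.
    """
    counts = np.array([len(lesions) for lesions in annotations.values()])
    return {
        "num images": len(annotations),
        "num lesions": int(counts.sum()),
        "mean lesions per image": float(counts.mean()),
    }

stats = dataset_statistics({"img_1": [(10, 20)], "img_2": [(5, 5), (30, 40)]})
print(stats)  # {'num images': 2, 'num lesions': 3, 'mean lesions per image': 1.5}
```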
To split the data into the train, validation, and test subsets, the framework uses a split file defined in the dataset folder.
The splits used in the experiments are reported in datasets.
DATASET | SMALL LESION | SPLIT | REFERENCE |
---|---|---|---|
INbreast | microcalcifications | INbreast split | INbreast reference |
E-ophtha-MA | microaneurysms | E-ophtha-MA split | E-ophtha-MA reference |
Cervix93 | nuclei | Cervix93 split | Cervix93 reference |
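Reading a split file can be sketched as follows; the CSV layout (columns `image` and `split`) is an assumption for illustration, not the framework's actual format:

```python
import csv
from collections import defaultdict

def read_split_file(path: str) -> dict:
    """Group image names by subset from a split file.

    Assumed (hypothetical) CSV layout: one row per image with
    columns 'image' and 'split' in {'train', 'validation', 'test'}.
    """
    subsets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            subsets[row["split"]].append(row["image"])
    return dict(subsets)
```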
All information about each dataset is reported in the statistics of the corresponding dataset.
DATASET | STATISTICS |
---|---|
INbreast | INbreast statistics |
E-ophtha-MA | E-ophtha-MA statistics |
Cervix93 | Cervix93 statistics |
Transformations on each sample in the dataset are defined by a class.
The framework uses a collate function to define the transformations to be applied, depending on the normalization type: none, min-max, and std.
The transformations to be applied vary depending on the application and the type of data used; to this end, we provide basic transformations.
Augmentation transformations, such as horizontal and vertical flipping, can be applied to the train dataset.
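A minimal sketch of the three normalization options (`none`, `min-max`, `std`); the function name and the epsilon guard against division by zero are assumptions:

```python
import numpy as np

def normalize(image: np.ndarray, norm: str) -> np.ndarray:
    """Apply the selected normalization type to an image.

    Hypothetical sketch of the 'none', 'min-max', and 'std' options;
    the small epsilon avoids division by zero on constant images.
    """
    image = image.astype(np.float32)
    if norm == "min-max":
        lo, hi = image.min(), image.max()
        return (image - lo) / (hi - lo + 1e-8)
    if norm == "std":
        return (image - image.mean()) / (image.std() + 1e-8)
    return image  # norm == 'none'
```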
The scripts in script-anchors provide the code to inspect the gravity-points configuration and the hooking process:
SCRIPT-ANCHORS | DESCRIPTION |
---|---|
Gravity-Points-Configuration.py | Save gravity-points configuration |
Gravity-Points-Hooking.py | Save gravity-points hooking process |
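The two steps can be sketched as follows: generating a regular grid of gravity points (a `grid-<step>` configuration) and hooking each point to its nearest lesion. Both functions are illustrative simplifications, not the scripts' actual code:

```python
import numpy as np

def gravity_points_grid(image_size, step):
    """Place one gravity point every `step` pixels in each dimension
    (illustrative sketch of a 'grid-<step>' configuration)."""
    h, w = image_size
    ys = np.arange(step // 2, h, step)
    xs = np.arange(step // 2, w, step)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel()], axis=1)  # (N, 2) as (x, y)

def hook_gravity_points(points, lesions, radius):
    """Hook each gravity point to its nearest lesion; a point is a
    positive sample only if that lesion lies within `radius` pixels
    (illustrative sketch of the hooking process)."""
    d = np.linalg.norm(points[:, None, :] - lesions[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    positive = d.min(axis=1) <= radius
    return nearest, positive

points = gravity_points_grid((60, 60), step=15)
print(points.shape)  # (16, 2): a 4x4 grid
```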
GravityNet is a one-stage end-to-end detector composed of a backbone network and two specific subnetworks.
The backbone is a convolutional network and plays the role of feature extractor.
The first subnet performs convolutional object classification on the backbone's output.
The second subnet performs convolutional gravity-points regression.
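The two subnetworks can be sketched as heads on top of the backbone's feature map; this PyTorch snippet is an illustrative simplification (single-layer heads, assumed channel layout), not the released architecture:

```python
import torch
import torch.nn as nn

class GravityHeads(nn.Module):
    """Illustrative heads: a classification subnet scoring each gravity
    point and a regression subnet predicting the (dx, dy) shift that
    moves each point towards a lesion. The real subnets are deeper."""

    def __init__(self, in_channels: int, points_per_cell: int):
        super().__init__()
        self.cls_subnet = nn.Conv2d(in_channels, points_per_cell, 3, padding=1)
        self.reg_subnet = nn.Conv2d(in_channels, points_per_cell * 2, 3, padding=1)

    def forward(self, features: torch.Tensor):
        scores = self.cls_subnet(features)  # lesion score per gravity point
        shifts = self.reg_subnet(features)  # (dx, dy) shift per gravity point
        return scores, shifts

heads = GravityHeads(in_channels=256, points_per_cell=4)
scores, shifts = heads(torch.zeros(1, 256, 16, 16))
print(scores.shape, shifts.shape)
```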
The available backbones:
| ResNet | ResNeXt | DenseNet | EfficientNet | EfficientNetV2 | SwinTransformer |
|---|---|---|---|---|---|
| ResNet-18 | ResNeXt-50_32x4d | DenseNet-121 | EfficientNet-B0 | EfficientNetV2-S | Swin-T |
| ResNet-34 | ResNeXt-101_32x8d | DenseNet-161 | EfficientNet-B1 | EfficientNetV2-M | Swin-S |
| ResNet-50 | ResNeXt-101_64x4d | DenseNet-169 | EfficientNet-B2 | EfficientNetV2-L | Swin-B |
| ResNet-101 | | DenseNet-201 | EfficientNet-B3 | | |
| ResNet-152 | | | EfficientNet-B4 | | |
| | | | EfficientNet-B5 | | |
| | | | EfficientNet-B6 | | |
| | | | EfficientNet-B7 | | |
The available execution modes:
EXECUTION MODE | DESCRIPTION |
---|---|
train | train model |
test | test model |
train_test | train and test model |
The available script execution modes:
SCRIPT EXECUTION MODE | DESCRIPTION | DOCUMENTATION |
---|---|---|
script_anchors | script-anchors execution mode | script-anchors documentation |
script_dataset | script-dataset execution mode | script-dataset documentation |
explainability | explainability mode | script-explainability documentation |
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -u GravityNet.py train_test
--dataset_path = "path to dataset main folder"
--experiments_path = "path to experiments result"
--images_extension = [png, tif]
--images_masks_extension = [png, none]
--annotations_extension = csv
--dataset = "Dataset Name"
--do_dataset_augmentation
--num_channels = [1, 3]
--small_lesion = [microcalcifications, microaneurysms, nuclei]
--split = [default, 1-fold, 2-fold]
--rescale = [0.5, 1.0]
--norm = [none, min-max, std]
--epochs = 100
--lr = 1e-04
--bs = 8
--backbone = ResNet-152
--pretrained
--config = grid-15
--hook = 15
--eval = distance10
--FP_images = [all, normals]
--score_threshold = 0.05
--do_NMS
--NMS_box_radius = "Lesion Radius"
--do_output_gravity
--num_images = "Num Images"
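The `--do_NMS` / `--NMS_box_radius` options can be illustrated with a distance-based non-maximum suppression over point detections; the 2 * radius suppression rule is an assumption for this sketch:

```python
import numpy as np

def nms_points(points, scores, radius):
    """Greedy NMS for point detections (illustrative sketch): keep the
    highest-scoring detection and suppress any other detection closer
    than 2 * radius pixels to one already kept."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(np.linalg.norm(points[i] - points[j]) > 2 * radius for j in kept):
            kept.append(int(i))
    return kept

points = np.array([[0.0, 0.0], [1.0, 0.0], [50.0, 0.0]])
scores = np.array([0.9, 0.8, 0.7])
print(nms_points(points, scores, radius=5))  # [0, 2]
```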