Pytorch implementation of DCRA-Net presented for dynamic fetal cardiac MRI reconstruction in
DCRA-Net: Attention-Enabled Reconstruction Model for Dynamic Fetal Cardiac MRI
Denis Prokopenko¹, David F.A. Lloyd¹,², Amedeo Chiribiri¹, Daniel Rueckert³,⁴, Joseph V. Hajnal¹
¹King’s College London, ²Evelina London Children’s Hospital, ³Imperial College London, ⁴Technical University of Munich
Dynamic Cardiac Reconstruction Attention Network (DCRA-Net) is a model that reconstructs the dynamics of the fetal heart from highly accelerated free-running (non-gated) MRI acquisitions by exploiting attention mechanisms in the spatial and temporal domains together with a temporal frequency representation of the data.
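As a minimal illustration of the temporal frequency idea (a sketch only, not the model's actual PyTorch pipeline), the x-f representation of a dynamic series is obtained by a Fourier transform along the time axis, where periodic cardiac motion concentrates into a few frequency bins; shown here with NumPy on synthetic data:

```python
import numpy as np

# Hypothetical dynamic image series: 32 frames of 96x96 complex-valued images
x = (np.random.randn(32, 96, 96) + 1j * np.random.randn(32, 96, 96)).astype(np.complex64)

# FFT along the temporal axis gives the x-f (temporal frequency) representation
xf = np.fft.fftshift(np.fft.fft(x, axis=0), axes=0)

# The transform is invertible, so the representation loses no information
x_rec = np.fft.ifft(np.fft.ifftshift(xf, axes=0), axis=0)
assert np.allclose(x, x_rec, atol=1e-4)
```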
Clone the repository and prepare the environment.
git clone https://github.com/denproc/DCRA-Net.git
cd DCRA-Net
python3 -m venv ./venv
source ./venv/bin/activate
pip3 install -r requirements.txt
Download VISTA masks, sample data, and model checkpoints used in the paper.
# download VISTA masks, sample data, and model checkpoints
curl -o data.tar.gz "https://drive.usercontent.google.com/download?id=195UYyNmVAak-QOQWrLB_j_2pJJMkt_tW&export=download&confirm=true"
# extract data
tar -xzvf data.tar.gz
In this section, we demonstrate the use of DCRA-Net on a fetal cardiac MRI dataset with sequences truncated to 32 frames, resized, and center-cropped to a resolution of 96×96 pixels.
To explore the options of the scripts:
python3 test.py -h
python3 train.py -h
Evaluation of pretrained models.
# Lattice Undersampling
python3 test.py --backbone DCRA-Net --dc_mode force --image_size 96 --n_frames 32 --representation_time frequency --in_channels 2 --out_channels 2 --batch_size 1 --save_dir DATADIR/evaluation_fetal_lattice --acceleration 8 --pattern lattice --data_dir DATADIR/sample_data/fetal --checkpoint_path DATADIR/model_checkpoints/dcranet_fetal_32-96-96_8x_lattice_checkpoint.pt --verbose
# VISTA Undersampling
python3 test.py --backbone DCRA-Net --dc_mode force --image_size 96 --n_frames 32 --representation_time frequency --in_channels 2 --out_channels 2 --batch_size 1 --save_dir DATADIR/evaluation_fetal_vista --acceleration 8 --pattern vista --mask_ucoef 07 --mask_dir DATADIR/vista_masks/96x32_acc8_07 --data_dir DATADIR/sample_data/fetal --checkpoint_path DATADIR/model_checkpoints/dcranet_fetal_32-96-96_8x_vista_checkpoint.pt --verbose
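For intuition about what `--pattern lattice` with `--acceleration 8` implies (a sketch with a hypothetical helper, not the repository's own mask-generation code), a k-t lattice keeps every 8th phase-encode line per frame and cycles the offset from frame to frame:

```python
import numpy as np

def lattice_mask(n_lines=96, n_frames=32, acceleration=8):
    """Sketch of a k-t lattice sampling mask: regular undersampling
    whose line offset cycles across frames (hypothetical helper)."""
    mask = np.zeros((n_frames, n_lines), dtype=bool)
    for t in range(n_frames):
        mask[t, t % acceleration::acceleration] = True
    return mask

mask = lattice_mask()
# Each frame keeps n_lines / acceleration lines -> 8x acceleration
assert mask.sum(axis=1).tolist() == [96 // 8] * 32
# Over any 8 consecutive frames, every line is sampled exactly once
assert mask[:8].sum(axis=0).tolist() == [1] * 96
```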
The data is expected to take the form of k-space sequences stored as a directory of files, with filenames following the format {patient_id}_{other-details}.hdf5.
It is important to use the same patient_id for all sequences acquired from the same subject so that the train|val split is valid.
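A patient-level split that respects this naming convention might look like the following (a sketch with hypothetical helper names and filenames; the repository's own split logic may differ):

```python
import os
from collections import defaultdict

def split_by_patient(filenames, val_fraction=0.2):
    # Group files by patient_id, i.e. the prefix before the first underscore
    by_patient = defaultdict(list)
    for name in filenames:
        patient_id = os.path.basename(name).split("_")[0]
        by_patient[patient_id].append(name)
    # Assign whole patients, not individual files, to validation
    patients = sorted(by_patient)
    n_val = max(1, int(len(patients) * val_fraction))
    val_ids = set(patients[:n_val])
    train = [f for p in patients if p not in val_ids for f in by_patient[p]]
    val = [f for p in val_ids for f in by_patient[p]]
    return train, val

files = ["p01_scan1.hdf5", "p01_scan2.hdf5", "p02_scan1.hdf5",
         "p03_scan1.hdf5", "p04_scan1.hdf5", "p05_scan1.hdf5"]
train, val = split_by_patient(files)
# No patient contributes files to both subsets
assert {f.split("_")[0] for f in train}.isdisjoint(f.split("_")[0] for f in val)
```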
# Lattice Undersampling
python3 train.py --backbone DCRA-Net --dc_mode force --image_size 96 --n_frames 32 --representation_time frequency --in_channels 2 --out_channels 2 --batch_size 1 --start_epoch 0 --n_epochs 10 --save_dir DATADIR/new_version --acceleration 8 --pattern lattice --data_dir DATADIR/training_data --verbose
# VISTA Undersampling
python3 train.py --backbone DCRA-Net --dc_mode force --image_size 96 --n_frames 32 --representation_time frequency --in_channels 2 --out_channels 2 --batch_size 1 --start_epoch 0 --n_epochs 10 --save_dir DATADIR/new_version --acceleration 8 --pattern vista --mask_ucoef 07 --mask_dir DATADIR/vista_masks/96x32_acc8_07 --data_dir DATADIR/training_data --verbose
If you use DCRA-Net in your project or find it useful, please cite our paper as follows.
@article{prokopenko2024dcranet,
title={{DCRA-Net: Attention-Enabled Reconstruction Model for Dynamic Fetal Cardiac MRI}},
author={Prokopenko, Denis and Lloyd, David FA and Chiribiri, Amedeo and Rueckert, Daniel and Hajnal, Joseph V},
journal={arXiv preprint arXiv:2412.15342},
year={2024},
}