Deep auto-encoder-decoder network for medical image segmentation with state-of-the-art results on skin lesion segmentation, lung segmentation, and retinal blood vessel segmentation. This method applies bidirectional convolutional LSTM (BConvLSTM) layers in a U-Net structure to non-linearly encode both semantic and high-resolution information. Furthermore, it applies densely connected convolution layers to include collective knowledge in the representation, and uses batch normalization layers to boost the convergence rate (a rough sketch of the BConvLSTM fusion idea is shown after the citations below). If this code helps with your research, please consider citing the following papers:
R. Azad, M. Asadi, M. Fathy, and S. Escalera, "Bi-Directional ConvLSTM U-Net with Densely Connected Convolutions", ICCV, 2019, download link.
M. Asadi, R. Azad, M. Fathy, and S. Escalera, "Multi-level Context Gating of Embedded Collective Knowledge for Medical Image Segmentation" (the first two authors contributed equally), arXiv:2003.05056, 2020, download link.
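To illustrate the core idea, below is a minimal, hypothetical Keras sketch of how an encoder skip connection and up-sampled decoder features can be fused with a bidirectional ConvLSTM; the layer choices, filter counts, and shapes are illustrative assumptions, not the exact implementation in models.py.

```python
# Minimal sketch of a BConvLSTM skip-connection block (illustrative only), assuming
# Keras with the TensorFlow backend and fixed spatial dimensions.
from keras import backend as K
from keras.layers import (Conv2D, UpSampling2D, Reshape, Concatenate,
                          ConvLSTM2D, Bidirectional)

def bconvlstm_skip_block(decoder_feat, encoder_feat, filters):
    """Fuse up-sampled decoder features with the encoder skip connection.
    Both inputs are assumed to end up with `filters` channels."""
    up = UpSampling2D(size=(2, 2))(decoder_feat)                  # bring decoder features to skip resolution
    up = Conv2D(filters, 3, padding='same', activation='relu')(up)

    h, w = K.int_shape(encoder_feat)[1:3]                         # fixed H, W assumed
    # Stack the two feature maps as a length-2 "temporal" sequence: (batch, time, H, W, C)
    seq = Concatenate(axis=1)([
        Reshape((1, h, w, filters))(encoder_feat),
        Reshape((1, h, w, filters))(up),
    ])
    # The bidirectional ConvLSTM non-linearly encodes the sequence in both directions
    fused = Bidirectional(
        ConvLSTM2D(filters // 2, kernel_size=3, padding='same', return_sequences=False)
    )(seq)
    return fused                                                  # (batch, H, W, filters) after concat merge
```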
- July 20, 2020: SEDU model added to the skin lesion segmentation code (inside models.py). You can now use this model for higher performance on skin lesion segmentation; inside the train file, call the SEDU_Net_D3 model.
- March 5, 2020: An extended version of the network has been released (the complete implementation for skin lesion segmentation on ISIC 2017, skin lesion segmentation on the PH2 dataset, and cell nuclei segmentation, along with the network implementation, will be updated soon).
- December 4, 2019: Document image binarization using BCDU-Net on the DIBCO challenges has been implemented, with the best performance on the DIBCO series: link
- August 28, 2019: First release (complete implementation for skin lesion segmentation on ISIC 2018, retina blood vessel segmentation, and lung segmentation added).
- August 27, 2019: Paper accepted at the ICCV 2019 workshop (oral presentation).
This code has been implemented in Python using the Keras library with the TensorFlow backend, and tested on Ubuntu, though it should be compatible with related environments. The following environment and libraries are needed to run the code (a quick environment check is sketched after this list):
- Python 3
- Keras with TensorFlow backend
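For instance, one quick way to confirm that Keras is using the TensorFlow backend before training (a sanity check only, not part of the repository; versions will vary):

```python
# Sanity check for the expected environment: Python 3, Keras with the TensorFlow backend.
import sys
import keras
import tensorflow as tf

print("Python:", sys.version.split()[0])        # expect 3.x
print("Keras:", keras.__version__)
print("TensorFlow:", tf.__version__)
assert keras.backend.backend() == "tensorflow", "Keras is not using the TensorFlow backend"
```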
To train the deep model for each task, go to the related folder and follow the steps below:
For skin lesion segmentation on ISIC 2018:
1- Download the ISIC 2018 train dataset from this link and extract both the training dataset and ground truth folders inside the dataset_isic18 folder.
2- Run Prepare_ISIC2018.py for data preparation and for dividing the data into train, validation, and test sets.
3- Run train_isic18.py to train the BCDU-Net model using the training and validation sets. The model will be trained for 100 epochs and will save the best weights for the validation set. You can also train a U-Net model on this dataset by changing the model to unet; however, its performance will be lower than BCDU-Net's (see the training sketch after these steps).
4- For performance calculation and producing the segmentation results, run evaluate.py. It will report the performance measures and save the related figures and results in the output folder.
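As a rough, hypothetical sketch of what the training step looks like (the builder names come from models.py as referenced above, but the preprocessed file names, input size, and hyperparameters here are assumptions for illustration, not a verbatim copy of train_isic18.py):

```python
# Hypothetical sketch of the ISIC 2018 training step (file names and settings are assumptions).
import numpy as np
from keras.callbacks import ModelCheckpoint
import models  # the repository's models.py

# Assumed outputs of Prepare_ISIC2018.py
tr_data, tr_mask = np.load('data_train.npy'), np.load('mask_train.npy')
val_data, val_mask = np.load('data_val.npy'), np.load('mask_val.npy')

# Swap in models.unet(...) or models.SEDU_Net_D3(...) to try the other architectures.
model = models.BCDU_net_D3(input_size=(256, 256, 3))
# Recompile defensively in case the builder does not compile the model itself.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Keep only the weights that perform best on the validation set.
checkpoint = ModelCheckpoint('weight_isic18.hdf5', monitor='val_loss',
                             save_best_only=True, verbose=1)
model.fit(tr_data, tr_mask,
          batch_size=8, epochs=100,
          validation_data=(val_data, val_mask),
          callbacks=[checkpoint])
```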
For retina blood vessel segmentation on the DRIVE dataset:
1- Download the DRIVE dataset from this link and extract both the training and test folders into a new folder named DRIVE.
2- Run prepare_datasets_DRIVE.py to read the whole dataset. This code will read all the train and test samples and save them as an HDF5 file in the DRIVE_datasets_training_testing folder.
3- The next step is to extract random patches from the training set to train the model. To do so, run save_patch.py; it will extract random 64x64 patches and save them as NumPy files (see the patch-extraction sketch after this section). This code uses functions from help_functions.py, pre_processing.py, and extract_patches.py for data normalization and patch extraction.
4- For model training, run train_retina.py; it will load the training data and use 20% of the training samples as a validation set. The model will be trained for 50 epochs and will save the best weights for the validation set.
5- For performance calculation and producing the segmentation results, run evaluate.py. It will report the performance measures and save the related figures and results in the test folder.
Note: For image pre-processing and patch extraction, we used the code from this GitHub repository.
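For reference, random patch extraction of the kind described in step 3 can be sketched as follows; this is an illustrative NumPy version with assumed array shapes, not the repository's extract_patches.py:

```python
# Illustrative sketch of random 64x64 patch extraction (not the repository's code).
import numpy as np

def extract_random_patches(images, masks, patch_size=64, n_patches=1000, seed=0):
    """images: (N, H, W, C) array, masks: (N, H, W) array.
    Returns matching image and mask patches of size patch_size x patch_size."""
    rng = np.random.RandomState(seed)
    n, h, w = images.shape[0], images.shape[1], images.shape[2]
    img_patches, mask_patches = [], []
    for _ in range(n_patches):
        i = rng.randint(0, n)                   # pick a random training image
        y = rng.randint(0, h - patch_size + 1)  # random top-left corner
        x = rng.randint(0, w - patch_size + 1)
        img_patches.append(images[i, y:y + patch_size, x:x + patch_size])
        mask_patches.append(masks[i, y:y + patch_size, x:x + patch_size])
    return np.asarray(img_patches), np.asarray(mask_patches)

# Example usage (array names are placeholders): save the patches as NumPy files.
# patches_img, patches_gt = extract_random_patches(train_imgs, train_masks)
# np.save('patches_imgs_train.npy', patches_img)
# np.save('patches_masks_train.npy', patches_gt)
```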
For lung segmentation:
1- Download the Lung Segmentation dataset from the Kaggle link and extract it.
2- Run Prepare_data.py for data preparation, train/test separation, and generating new masks around the lung tissues.
3- Run train_lung.py to train the BCDU-Net model using the training and validation sets (20 percent of the training set). The model will be trained for 50 epochs and will save the best weights for the validation set. You can train the BCDU-Net model with either 1 or 3 densely connected convolutions (see the sketch after these steps).
4- For performance calculation and producing the segmentation results, run evaluate_performance.py. It will report the performance measures and save the related figures and results.
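A rough, hypothetical sketch of the lung training step, showing the 20% validation split and the choice between 1 or 3 densely connected convolutions; the builder names BCDU_net_D1/BCDU_net_D3, file names, and hyperparameters are assumptions for illustration:

```python
# Hypothetical sketch of the lung segmentation training step (names and settings are assumptions).
import numpy as np
from keras.callbacks import ModelCheckpoint
import models  # the repository's models.py

tr_data = np.load('data_train.npy')   # assumed output of Prepare_data.py
tr_mask = np.load('mask_train.npy')

# Choose 1 or 3 densely connected convolutions: BCDU_net_D1(...) or BCDU_net_D3(...).
model = models.BCDU_net_D3(input_size=(256, 256, 1))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

checkpoint = ModelCheckpoint('weight_lung.hdf5', monitor='val_loss',
                             save_best_only=True, verbose=1)
model.fit(tr_data, tr_mask,
          batch_size=8, epochs=50,
          validation_split=0.2,   # 20 percent of the training set used for validation
          callbacks=[checkpoint])
```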
To evaluate the performance of the proposed method, three challenging tasks in medical image segmentation have been considered; the results of the proposed approach are illustrated below.
In order to compare the proposed method with state-of-the-art approaches on retinal blood vessel segmentation, we considered the DRIVE dataset.
Methods | Year | F1-score | Sensitivity | Specificity | Accuracy | AUC |
---|---|---|---|---|---|---|
Chen et al. (Hybrid Features) | 2014 | - | 0.7252 | 0.9798 | 0.9474 | 0.9648 |
Azzopardi et al. (Trainable COSFIRE Filters) | 2015 | - | 0.7655 | 0.9704 | 0.9442 | 0.9614 |
Roychowdhury et al. (Three-Stage Filtering) | 2016 | - | 0.7250 | 0.9830 | 0.9520 | 0.9620 |
Liskowski et al. (Deep Model) | 2016 | - | 0.7763 | 0.9768 | 0.9495 | 0.9720 |
Qiaoliang et al. (Cross-Modality Learning Approach) | 2016 | - | 0.7569 | 0.9816 | 0.9527 | 0.9738 |
Ronneberger et al. (U-Net) | 2015 | 0.8142 | 0.7537 | 0.9820 | 0.9531 | 0.9755 |
Alom et al. (Recurrent Residual U-Net) | 2018 | 0.8149 | 0.7726 | 0.9820 | 0.9553 | 0.9779 |
Oktay et al. (Attention U-Net) | 2018 | 0.8155 | 0.7751 | 0.9816 | 0.9556 | 0.9782 |
Alom et al. (R2U-Net) | 2018 | 0.8171 | 0.7792 | 0.9813 | 0.9556 | 0.9784 |
Azad et al. (Proposed BCDU-Net) | 2019 | 0.8222 | 0.8012 | 0.9784 | 0.9559 | 0.9788 |
Performance comparison on the skin lesion segmentation task (PC: precision, JS: Jaccard similarity):
Methods | Year | F1-score | Sensitivity | Specificity | Accuracy | PC | JS |
---|---|---|---|---|---|---|---|
Ronneberger et al. (U-Net) | 2015 | 0.647 | 0.708 | 0.964 | 0.890 | 0.779 | 0.549 |
Alom et al. (Recurrent Residual U-Net) | 2018 | 0.679 | 0.792 | 0.928 | 0.880 | 0.741 | 0.581 |
Oktay et al. (Attention U-Net) | 2018 | 0.665 | 0.717 | 0.967 | 0.897 | 0.787 | 0.566 |
Alom et al. (R2U-Net) | 2018 | 0.691 | 0.726 | 0.971 | 0.904 | 0.822 | 0.592 |
Azad et al. (Proposed BCDU-Net) | 2019 | 0.847 | 0.783 | 0.980 | 0.936 | 0.922 | 0.936 |
Azad et al. (MCGU-Net) | 2020 | 0.895 | 0.848 | 0.986 | 0.955 | 0.947 | 0.955 |
Performance comparison on the lung segmentation task:
Methods | Year | F1-score | Sensitivity | Specificity | Accuracy | AUC | JS |
---|---|---|---|---|---|---|---|
Ronneberger et al. (U-Net) | 2015 | 0.9658 | 0.9696 | 0.9872 | 0.9872 | 0.9784 | 0.9858 |
Alom et al. (Recurrent Residual U-Net) | 2018 | 0.9638 | 0.9734 | 0.9866 | 0.9836 | 0.9800 | 0.9836 |
Alom et al. (R2U-Net) | 2018 | 0.9832 | 0.9944 | 0.9832 | 0.9918 | 0.9889 | 0.9918 |
Azad et al. (Proposed BCDU-Net) | 2019 | 0.9904 | 0.9910 | 0.9982 | 0.9972 | 0.9946 | 0.9972 |
You can download the learned weights for each task from the table below and load them as sketched after the table.
Task | Dataset | Learned weights |
---|---|---|
Retina Blood Vessel Segmentation | Drive | BCDU_net_D3 |
Skin Lesion Segmentation | ISIC2018 | BCDU_net_D3 |
Lung Segmentation | Lung (Kaggle) | BCDU_net_D3 |
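A minimal, hypothetical sketch of loading the downloaded weights for inference; the weight file name, input size, and test-data file are assumptions that should be adapted to the chosen task:

```python
# Hypothetical sketch: load downloaded BCDU_net_D3 weights and run inference.
import numpy as np
import models  # the repository's models.py

model = models.BCDU_net_D3(input_size=(256, 256, 3))  # match the input shape of the chosen task
model.load_weights('weight_isic18.hdf5')              # path to the downloaded weight file

te_data = np.load('data_test.npy')                    # assumed preprocessed test set
predictions = model.predict(te_data, batch_size=8, verbose=1)
binary_masks = (predictions > 0.5).astype(np.uint8)   # threshold the soft segmentation maps
```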
All implementation was done by Reza Azad. For any queries, please contact us for more information.
rezazad68@gmail.com