
Comparing changes

base repository: NOEL-MNI/deepFCD
base: v1.1.4
head repository: NOEL-MNI/deepFCD
compare: main

Commits on Aug 12, 2022

  1. sort imports (ravnoor, f9ef4ce)
  2. remove unused imports (ravnoor, ce1649e)

Commits on Sep 19, 2022

  1. format code using black (ravnoor, 27d4c35)
  2. update .gitignore (ravnoor, 7dda884)
  3. add helper Makefile (ravnoor, 2951a70)

Commits on Sep 20, 2022

  1. 7bef950
  2. update requirements.txt (ravnoor, d199ebf)
     - add missing `psutil`
  3. add more variables (ravnoor, de615df)
  4. bump deepMask to latest version (ravnoor, 08c1db7)
     - minor preprocessing bug fixes
  5. fix typo in variable name (ravnoor, b20ada8)
  6. 50075c4
  7. bump tqdm version (ravnoor, d84a397)

Commits on Sep 21, 2022

  1. e35bec0
  2. remove undefined variable (ravnoor, a7f7295)
  3. add missing import (ravnoor, 664bb22)
  4. install dependencies (ravnoor, 61b2dba)
  5. update path (ravnoor, fc044ab)
  6. 284b73a
  7. add missing py36 (ravnoor, 9ea0c70)
  8. rename (ravnoor, 579c4f0)
  9. test docker image build (ravnoor, ed3ff6e)
  10. remove py36 (ravnoor, e5db195)
      - incompatible with `matplotlib==3.5.1`
  11. add postprocessing (ravnoor, 800c373)

Commits on Sep 22, 2022

  1. ac60f8f
  2. add subcortical mask to filter out false positives (ravnoor, dee8a74)
     - incorporate prior domain knowledge
  3. 13af6df

Commits on Dec 23, 2022

  1. update requirements (ravnoor, c5bff1b)
     - missing older versions from PyPi
  2. b75e1bc
  3. rename (ravnoor, 7b6413e)
  4. install jupyter kernel for notebooks (ravnoor, 4f7d078)
     - corresponds to the conda environment

Commits on Jan 5, 2023

  1. Merge pull request #13 from NOEL-MNI/reporting (ravnoor, 4aba9f8, verified signature)
     - update final reporting and requirements (fixes gh-12)

Commits on Jan 23, 2023

  1. 2b36b5b
  2. update dependencies [skip ci] (ravnoor, 26a739d)

Commits on Feb 28, 2023

  1. improve reporting (ravnoor, 6137ca4)
     - annotate/cleanup script
     - update documentation

Commits on Mar 1, 2023

  1. document reporting script (ravnoor, 7345b79)

Commits on Mar 4, 2023

  1. add article DOI (ravnoor, cc35951)
  2. add APA citation (ravnoor, 51ea98d)
  3. fix typo [skip ci] (ravnoor, bcfa957)

Commits on Mar 24, 2023

  1. 07dbf95

Commits on Sep 11, 2023

  1. add annotations (ravnoor, fa5f470)
  2. a5aa335
  3. 6caf7b5
  4. update Makefile (ravnoor, 836b73f)
  5. 4fe0a54
  6. 3e7ba2f
  7. update NOEL-MNI/deepMask (ravnoor, ddd4346)
  8. update version tags (ravnoor, eb21751)
  9. fix import (ravnoor, e929331)
     - revert a5aa335

Commits on Sep 13, 2023

  1. bump tqdm version (ravnoor, 0188595)
  2. bump python to py38 (ravnoor, 66e30dc)
Showing 2,308 additions and 618 deletions across 41 changed files.
  1. +3 −1 .dockerignore
  2. +16 −0 .github/workflows/docker-image-build.yml
  3. +4 −4 .github/workflows/{docker-image.yml → docker-image-release.yml}
  4. +30 −0 .github/workflows/python-build.yml
  5. +56 −0 .github/workflows/test-inference-pipeline.yml
  6. +7 −1 .gitignore
  7. +8 −6 Dockerfile
  8. +101 −0 Makefile
  9. +63 −15 README.md
  10. +1 −1 app/deepMask
  11. +144 −73 app/inference.py
  12. +32 −23 app/models/model_builder.py
  13. +12 −7 app/models/noel_models_keras.py
  14. +59 −16 app/preprocess.py
  15. +1 −0 app/preprocess.sh
  16. +5 −2 app/requirements.txt
  17. BIN app/templates/subcortical_mask_v3.nii.gz
  18. +10 −6 app/train.py
  19. +446 −150 app/utils/base.py
  20. +74 −31 app/utils/bayes_uncertainty_utils.py
  21. +145 −0 app/utils/confidence.py
  22. +75 −36 app/utils/create_hdf5_patch_dataset.py
  23. +180 −58 app/utils/h5data.py
  24. +1 −1 app/utils/helpers.py
  25. +85 −71 app/utils/keras_bayes_utils.py
  26. +16 −15 app/utils/metrics.py
  27. +194 −63 app/utils/patch_dataloader.py
  28. +60 −38 app/utils/post_processor.py
  29. +20 −0 app/utils/read_h5data.py
  30. +78 −0 app/utils/reporting.py
  31. +68 −0 ci/runner.Dockerfile
  32. +29 −0 ci/runner.docker-compose.yml
  33. +29 −0 ci/start-runner.sh
  34. +12 −0 docs/reporting.md
  35. +9 −0 tests/run_tests.sh
  36. BIN ...egmentations/sub-00055/noel_deepFCD_dropoutMC/sub-00055_noel_deepFCD_dropoutMC_prob_mean_1.nii.gz
  37. BIN ...segmentations/sub-00055/noel_deepFCD_dropoutMC/sub-00055_noel_deepFCD_dropoutMC_prob_var_1.nii.gz
  38. BIN tests/segmentations/sub-00055/sub-00055_brain_mask_final.nii.gz
  39. BIN tests/segmentations/sub-00055/sub-00055_label_dilated_final.nii.gz
  40. +150 −0 tests/test_deepFCD.py
  41. +85 −0 tests/utils.py
4 changes: 3 additions & 1 deletion .dockerignore
@@ -1,3 +1,5 @@
# ignore deepMask
# git clone <> in Dockerfile instead
app/deepMask
app/deepMask
memray*.bin
memray*.html
16 changes: 16 additions & 0 deletions .github/workflows/docker-image-build.yml
@@ -0,0 +1,16 @@
name: Test Docker image build

on: [push, pull_request]

jobs:
  push_to_registry:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4

      - name: Build Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: false
8 changes: 4 additions & 4 deletions .github/workflows/{docker-image.yml → docker-image-release.yml}
@@ -10,22 +10,22 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v2
uses: actions/checkout@v4

- name: Log in to Docker Hub
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}

- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
uses: docker/metadata-action@v5
with:
images: noelmni/deep-fcd

- name: Build and push Docker image
uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
uses: docker/build-push-action@v5
with:
context: .
push: true
30 changes: 30 additions & 0 deletions .github/workflows/python-build.yml
@@ -0,0 +1,30 @@

name: Test Python app dependencies

on: [push, pull_request]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8"]

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8
          pip install -r ./app/requirements.txt
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
56 changes: 56 additions & 0 deletions .github/workflows/test-inference-pipeline.yml
@@ -0,0 +1,56 @@

name: Testing Inference Pipeline

on: [push, pull_request]

env:
  CI_TESTING: True

jobs:
  build:

    runs-on: self-hosted

    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
        with:
          submodules: recursive

      - name: Install dependencies for deepMask
        run: |
          eval "$(conda shell.bash hook)"
          conda create -n preprocess python=3.8
          conda activate preprocess
          python -m pip install -r ./app/deepMask/app/requirements.txt
          conda deactivate
      - name: Install dependencies for deepFCD
        run: |
          python -m pip install -r ./app/requirements.txt
          conda install -c conda-forge pygpu==0.7.6
          pip cache purge
      - name: Download openneuro.org dataset to test the inference pipeline # https://openneuro.org/datasets/ds004199/versions/1.0.5
        run: |
          PATIENT_ID=sub-00055
          BASE_URL=https://s3.amazonaws.com/openneuro.org/ds004199/${PATIENT_ID}/anat
          mkdir -p ~/io/${PATIENT_ID}
          echo "retrieving single-patient multimodal dataset.."
          wget ${BASE_URL}/${PATIENT_ID}_acq-sag111_T1w.nii.gz\?versionId\=IKGWDiLR7B7ls2yPVyycJo.6R1Sqhujf -O ~/io/sub-00055/t1.nii.gz
          wget ${BASE_URL}/${PATIENT_ID}_acq-tse3dvfl_FLAIR.nii.gz\?versionId\=HmzYoUuYkdbyd8jkpdJjVkZydRHNSqUX -O ~/io/sub-00055/flair.nii.gz
          wget ${BASE_URL}/${PATIENT_ID}_acq-tse3dvfl_FLAIR_roi.nii.gz\?versionId\=ulmEU3nb8WCvGwcwTbkcdNSVr07PMPQN -O ~/io/sub-00055/label.nii.gz
      - name: Run inference for deepFCD
        run: |
          ./app/inference.py ${CI_TESTING_PATIENT_ID} t1.nii.gz flair.nii.gz ~/io cuda 1 1
        env:
          CI_TESTING_PATIENT_ID: "sub-00055"
          CI_TESTING_GT: "./tests/segmentations/sub-00055/sub-00055_label_dilated_final.nii.gz"
          CI_TESTING_PRED_DIR: "/home/ga/io"

      - name: Run tests to compare outputs with previous validated runs
        run: bash ./tests/run_tests.sh
        env:
          CI_TESTING_PATIENT_ID: "sub-00055"
          CI_TESTING_PRED_DIR: "/home/ga/io"
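Note: the `CI_TESTING*` variables defined in this workflow are consumed downstream: `app/inference.py` (see its diff below) checks `CI_TESTING` and uses `CI_TESTING_GT` to mask the input images with the ground-truth label during CI runs, while the final step passes `CI_TESTING_PATIENT_ID` and `CI_TESTING_PRED_DIR` through to `tests/run_tests.sh` so the new test suite can locate the predictions to compare against the validated outputs under `tests/segmentations/`.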
8 changes: 7 additions & 1 deletion .gitignore
@@ -1,2 +1,8 @@
data
__pycache__
__pycache__
notebooks/
.ipynb_checkpoints/
*.csv
memray*.bin
memray*.html
.env
14 changes: 8 additions & 6 deletions Dockerfile
@@ -1,4 +1,4 @@
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
FROM noelmni/cuda:10.0-cudnn7-devel-ubuntu18.04
LABEL maintainer="Ravnoor Singh Gill <ravnoor@gmail.com>" \
org.opencontainers.image.title="deepFCD" \
org.opencontainers.image.description="Automated Detection of Focal Cortical Dysplasia using Deep Learning" \
@@ -35,17 +35,17 @@ USER user
ENV HOME=/home/user
RUN chmod 777 /home/user

RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh \
&& /bin/bash Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/user/conda \
&& rm -f Miniconda3-py37_4.12.0-Linux-x86_64.sh
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py38_23.5.2-0-Linux-x86_64.sh \
&& /bin/bash Miniconda3-py38_23.5.2-0-Linux-x86_64.sh -b -p /home/user/conda \
&& rm Miniconda3-py38_23.5.2-0-Linux-x86_64.sh

RUN conda update -n base -c defaults conda
# RUN conda update -n base -c defaults conda

RUN git clone --depth 1 https://github.com/NOEL-MNI/deepMask.git \
&& rm -rf deepMask/.git

RUN eval "$(conda shell.bash hook)" \
&& conda create -n preprocess python=3.7 \
&& conda create -n preprocess python=3.8 \
&& conda activate preprocess \
&& python -m pip install -r deepMask/app/requirements.txt \
&& conda deactivate
@@ -58,6 +58,8 @@ RUN python -m pip install -r /app/requirements.txt \

COPY app/ /app/

COPY tests/ /tests/

RUN sudo chmod -R 777 /app && sudo chmod +x /app/inference.py

CMD ["python3"]
101 changes: 101 additions & 0 deletions Makefile
@@ -0,0 +1,101 @@
ACCOUNT := noelmni
SERVICE := deep-fcd
IMAGE := $(ACCOUNT)/$(SERVICE) # noelmni/deep-fcd
TAG := latest
UID := 2551
GID := 618
CASE_ID := sub-00055
TMPDIR := /host/hamlet/local_raid/data/ravnoor/sandbox
PRED_DIR := /host/hamlet/local_raid/data/ravnoor/sandbox/pytests
BRAIN_MASKING := 1
PREPROCESS := 1

.PHONY: all clean

build:
docker build -t $(ACCOUNT)/$(SERVICE):$(TAG) .

clean-build:
docker build -t $(ACCOUNT)/$(SERVICE):$(TAG) . --no-cache

test-pipeline:
./app/inference.py $(CASE_ID) t1.nii.gz flair.nii.gz $(TMPDIR) cuda0 $(BRAIN_MASKING) $(PREPROCESS)

memray-profiling:
python3 -m memray run ./app/inference.py $(CASE_ID) t1_brain.nii.gz t2_brain.nii.gz $(TMPDIR) cuda0 0 0

memray-profiling-cpu:
python3 -m memray run ./app/inference.py $(CASE_ID) t1_brain.nii.gz t2_brain.nii.gz $(TMPDIR) cpu 0 0

test-preprocess:
./app/preprocess.sh $(CASE_ID) t1.nii.gz flair.nii.gz $(TMPDIR) $(BRAIN_MASKING) $(PREPROCESS)

test-pipeline-docker:
docker run --rm -it --init \
--gpus=all \
--user="$(UID):$(GID)" \
--volume="$(TMPDIR):/tmp" \
$(ACCOUNT)/$(SERVICE):$(TAG) \
/app/inference.py $(CASE_ID) T1.nii.gz FLAIR.nii.gz /tmp cuda0 $(BRAIN_MASKING) $(PREPROCESS)

test-pipeline-docker_ci:
docker run --rm -it --init \
--gpus=all \
--user="$(UID):$(GID)" \
--volume="$(TMPDIR):/tmp" \
--env CI_TESTING=1 \
--env CI_TESTING_GT=/tmp/$(CASE_ID)/label_final_MD.nii.gz \
$(ACCOUNT)/$(SERVICE):$(TAG) \
/app/inference.py $(CASE_ID) T1.nii.gz FLAIR.nii.gz /tmp cuda0 $(BRAIN_MASKING) $(PREPROCESS)

test-pipeline-docker_testing:
docker run --rm -it --init \
--gpus=all \
--user="$(UID):$(GID)" \
--volume="$(PRED_DIR):/tmp" \
--env CI_TESTING=1 \
--env CI_TESTING_PATIENT_ID=$(CASE_ID) \
--env CI_TESTING_PRED_DIR=/tmp \
$(ACCOUNT)/$(SERVICE):$(TAG) \
bash /tests/run_tests.sh

test-reporting:
./app/utils/reporting.py $(CASE_ID) $(TMPDIR)/

install-jupyter-kernel:
python -m ipykernel install --user --name deepFCD

clean:
rm -rf $(TMPDIR)/$(CASE_ID)/{tmp,native,transforms}
rm -f $(TMPDIR)/$(CASE_ID)/{*_final,*denseCrf3d*,*_native,*_maskpred}.nii.gz

docker-clean:
docker run --rm -it --init \
--volume="$(TMPDIR):/tmp" \
busybox:latest \
rm -rf /tmp/$(CASE_ID)/{tmp,native,transforms,noel_deepFCD_dropoutMC} && \
rm -f /tmp/$(CASE_ID)/{*_final,*denseCrf3d*,*_native,*_maskpred}.nii.gz

prune:
docker image prune

runner-build:
docker-compose -f ci/runner.docker-compose.yml build

runner-ps:
docker-compose -f ci/runner.docker-compose.yml ps

runner-up:
docker-compose -f ci/runner.docker-compose.yml up --remove-orphans -d

runner-down:
docker-compose -f ci/runner.docker-compose.yml down

runner-logs:
docker-compose -f ci/runner.docker-compose.yml logs -f

runner-scale:
docker-compose -f ci/runner.docker-compose.yml up --scale runner=1 -d

runner-bash:
docker-compose -f ci/runner.docker-compose.yml exec -it runner bash
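Note: the variables defined at the top of the Makefile (`CASE_ID`, `TMPDIR`, `PRED_DIR`, `TAG`, `BRAIN_MASKING`, `PREPROCESS`) are ordinary make variables, so any target can be run against a different case or directory without editing the file, e.g. `make test-pipeline-docker CASE_ID=FCD_001 TMPDIR=/path/to/io` (illustrative values).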
78 changes: 63 additions & 15 deletions README.md
@@ -5,30 +5,35 @@

<p align="center">
<a href="https://www.python.org/">
<img src="https://img.shields.io/badge/Python-3.7-ff69b4.svg" /></a>
<img src="https://img.shields.io/badge/Python-3.7+-ff69b4.svg" /></a>
<a href= "https://keras.io/">
<img src="https://img.shields.io/badge/Keras-2.2.4-2BAF2B.svg" /></a>
<a href= "https://github.com/Theano/Theano">
<img src="https://img.shields.io/badge/Theano-1.0.4-2BAF2B.svg" /></a>
<a href= "https://github.com/NOEL-MNI/deepFCD/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-BSD%203--Clause-blue.svg" /></a>
<img src="https://img.shields.io/badge/License-BSD%203--Clause-cyan.svg" /></a>
<a href="https://doi.org/10.1212/WNL.0000000000012698">
<img src="https://img.shields.io/badge/DOI/article-10.1212%2FWNL.0000000000012698-blue" alt="DOI/article"></a>
<a href="https://doi.org/10.5281/zenodo.4521706">
<img src="https://zenodo.org/badge/DOI/10.5281/zenodo.4521706.svg" alt="DOI"></a>
<img src="https://img.shields.io/badge/DOI/data-10.5281%2Fzenodo.4521706-blue" alt="DOI/data"></a>
</p>


------------------------

![](assets/diagram.jpg)

### Please cite:
> Gill, R. S., Lee, H. M., Caldairou, B., Hong, S. J., Barba, C., Deleo, F., D'Incerti, L., Mendes Coelho, V. C., Lenge, M., Semmelroch, M., Schrader, D. V., Bartolomei, F., Guye, M., Schulze-Bonhage, A., Urbach, H., Cho, K. H., Cendes, F., Guerrini, R., Jackson, G., Hogan, R. E., … Bernasconi, A. (2021). Multicenter Validation of a Deep Learning Detection Algorithm for Focal Cortical Dysplasia. Neurology, 97(16), e1571–e1582. https://doi.org/10.1212/WNL.0000000000012698
OR

```TeX
@article{GillFCD2021,
title = {Multicenter Validated Detection of Focal Cortical Dysplasia using Deep Learning},
author = {Gill, Ravnoor Singh and Lee, Hyo-Min and Caldairou, Benoit and Hong, Seok-Jun and Barba, Carmen and Deleo, Francesco and D'Incerti, Ludovico and Coelho, Vanessa Cristina Mendes and Lenge, Matteo and Semmelroch, Mira and others},
journal = {Neurology},
year = {2021},
publisher = {Americal Academy of Neurology},
publisher = {American Academy of Neurology},
code = {\url{https://github.com/NOEL-MNI/deepFCD}},
doi = {https://doi.org/10.1212/WNL.0000000000012698}
}
@@ -37,12 +42,12 @@
## Pre-requisites
```bash
0. Anaconda Python Environment
1. Python == 3.7.x
1. Python == 3.8
2. Keras == 2.2.4
3. Theano == 1.0.4
4. ANTsPy == 0.3.2 (for MRI preprocessing)
4. ANTsPyNet == 0.1.8 (for MRI preprocessing)
5. PyTorch == 1.11.0 (for deepMask)
4. ANTsPy == 0.4.2 (for MRI preprocessing)
4. ANTsPyNet == 0.2.3 (for deepMask)
5. PyTorch == 1.8.2 LTS (for deepMask)
6. h5py == 2.10.0
+ app/requirements.txt
+ app/deepMask/app/requirements.txt
@@ -60,14 +65,14 @@ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/
bash ~/miniconda.sh -b -p $HOME/miniconda

# create and activate a Conda environment for preprocessing
conda create -n preprocess python=3.7
conda create -n preprocess python=3.8
conda activate preprocess
# install dependencies using pip
python -m pip install -r app/deepMask/app/requirements.txt
conda deactivate

# create and activate a Conda environment for deepFCD
conda create -n deepFCD python=3.7
conda create -n deepFCD python=3.8
conda activate deepFCD
# install dependencies using pip
python -m pip install -r app/requirements.txt
@@ -108,6 +113,10 @@ export OMP_NUM_THREADS=6 \ # specify number of threads to initialize when usi
1 \ # perform (`1`) or not perform (`0`) image pre-processing

```
#### example:
```bash
./app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cpu 1 1
```

### 3.2 Inference (GPU)
```bash
@@ -122,13 +131,17 @@ chmod +x ./app/inference.py # make the script executable -ensure you have the
1 \ # perform (`1`) or not perform (`0`) image pre-processing

```
#### example:
```bash
./app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cuda0 1 1
```

### 3.3 Inference using Docker (GPU), requires [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
```bash
docker run --rm -it --init \
--gpus=all \ # expose the host GPUs to the guest docker container
--user="$(id -u):$(id -g)" \ # map user permissions appropriately
--volume="$PWD:/io" \ # $PWD refers to the present working directory containing the input images, can be modified to a local host directory
--volume="${IO_DIRECTORY}:/io" \ # $PWD refers to the present working directory containing the input images, can be modified to a local host directory
noelmni/deep-fcd:latest \ # docker image containing all the necessary software dependencies
/app/inference.py \ # the script to perform inference on the multimodal MRI images
${PATIENT_ID} \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
@@ -139,12 +152,16 @@ docker run --rm -it --init \
1 \ # perform (`1`) or not perform (`0`) brain extraction
1 \ # perform (`1`) or not perform (`0`) image pre-processing
```
#### example:
```bash
docker run --rm -it --init --gpus=all --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cuda0 1 1
```

### 3.4 Inference using Docker (CPU)
```bash
docker run --rm -it --init \
--user="$(id -u):$(id -g)" \ # map user permissions appropriately
--volume="$PWD:/io" \ # $PWD refers to the present working directory containing the input images, can be modified to a local host directory
--volume="${IO_DIRECTORY}:/io" \ # $PWD refers to the present working directory containing the input images, can be modified to a local host directory
--env OMP_NUM_THREADS=6 \ # specify number of threads to initialize - by default this variable is set to half the number of available logical cores
noelmni/deep-fcd:latest \ # docker image containing all the necessary software dependencies
/app/inference.py \ # the script to perform inference on the multimodal MRI images
@@ -156,10 +173,41 @@ docker run --rm -it --init \
1 \ # perform (`1`) or not perform (`0`) brain extraction
1 \ # perform (`1`) or not perform (`0`) image pre-processing
```
#### example:
```bash
docker run --rm -it --init --env OMP_NUM_THREADS=6 --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cpu 1 1
```

## 4. Reporting
[example output](docs/reporting.md)

### 4.1 Reporting output
```bash
chmod +x ./app/utils/reporting.py
./app/utils/reporting.py ${PATIENT_ID} ${IO_DIRECTORY}
```
#### example:
```bash
./app/utils/reporting.py FCD_001 /io
```

### 4.2 Reporting output using Docker
```bash
docker run --rm -it --init \
--user="$(id -u):$(id -g)"
--volume="${IO_DIRECTORY}:/io" noelmni/deep-fcd:latest
/app/utils/reporting.py ${PATIENT_ID} /io
```
#### example:
```bash
docker run --rm -it --init --gpus=all --volume=$PWD/io:/io noelmni/deep-fcd:latest /app/utils/reporting.py FCD_001 /io
```


## License
<a href= "https://opensource.org/licenses/BSD-3-Clause"><img src="https://img.shields.io/badge/License-BSD%203--Clause-blue.svg" /></a>

```console
Copyright 2021 Neuroimaging of Epilepsy Laboratory, McGill University
```
Copyright 2023 Neuroimaging of Epilepsy Laboratory, McGill University
```

217 changes: 144 additions & 73 deletions app/inference.py
@@ -1,51 +1,63 @@
#!/usr/bin/env python3

import os
import sys
import logging
import multiprocessing
from mo_dots import Data
import os
import subprocess
from config.experiment import options
import sys
import warnings
warnings.filterwarnings('ignore')

from mo_dots import Data

from config.experiment import options

warnings.filterwarnings("ignore")
import time

import numpy as np
import setproctitle as spt
from tqdm import tqdm

from utils.helpers import *

logging.basicConfig(level=logging.DEBUG,
style='{',
datefmt='%Y-%m-%d %H:%M:%S',
format='{asctime} {levelname} {filename}:{lineno}: {message}')
logging.basicConfig(
level=logging.DEBUG,
style="{",
datefmt="%Y-%m-%d %H:%M:%S",
format="{asctime} {levelname} {filename}:{lineno}: {message}",
)

os.environ["KERAS_BACKEND"] = "theano"

# GPU/CPU options
options['cuda'] = sys.argv[5] # cpu, cuda, cuda0, cuda1, or cudaX: flag using gpu 1 or 2
if options['cuda'].startswith('cuda1'):
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cuda1,floatX=float32,dnn.enabled=False"
elif options['cuda'].startswith('cpu'):
options["cuda"] = sys.argv[5]
# cpu, cuda, cuda0, cuda1, or cudaX: flag using gpu 1 or 2
if options["cuda"].startswith("cuda1"):
os.environ[
"THEANO_FLAGS"
] = "mode=FAST_RUN,device=cuda1,floatX=float32,dnn.enabled=False"
elif options["cuda"].startswith("cpu"):
cores = str(multiprocessing.cpu_count() // 2)
var = os.getenv('OMP_NUM_THREADS', cores)
var = os.getenv("OMP_NUM_THREADS", cores)
try:
logging.info("# of threads initialized: {}".format(int(var)))
except ValueError:
raise TypeError("The environment variable OMP_NUM_THREADS"
" should be a number, got '%s'." % var)
raise TypeError(
"The environment variable OMP_NUM_THREADS"
" should be a number, got '%s'." % var
)
# os.environ['openmp'] = 'True'
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cpu,openmp=True,floatX=float32"
else:
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cuda0,floatX=float32,dnn.enabled=False"
logging.info(os.environ["THEANO_FLAGS"])

from models.noel_models_keras import *
from keras.models import load_model
from keras import backend as K
from utils.metrics import *
from utils.base import *
from keras.models import load_model

from models.noel_models_keras import *
from utils.base import *
from utils.metrics import *

# configuration
args = Data()
@@ -56,51 +68,76 @@
if not os.path.isabs(args.dir):
args.dir = os.path.abspath(args.dir)

args.brain_masking = int(sys.argv[6]) # set to True or any non-zero value for brain extraction or skull-removal, False otherwise
args.preprocess = int(sys.argv[7]) # co-register T1 and T2 images to MNI152 space and N3 correction before brain extraction (True/False)
args.brain_masking = int(sys.argv[6])
# set to True or any non-zero value for brain extraction or skull-removal, False otherwise
args.preprocess = int(sys.argv[7])
# co-register T1 and T2 images to MNI152 space and N3 correction before brain extraction (True/False)
args.outdir = os.path.join(args.dir, args.id)

args.t1 = os.path.join(args.outdir, args.t1_fname)
args.t2 = os.path.join(args.outdir, args.t2_fname)

args.t1_orig, args.t2_orig = args.t1, args.t2

cwd = os.path.realpath(os.path.dirname(__file__))

if bool(args.brain_masking):
if options['cuda'].startswith('cuda'):
if options["cuda"].startswith("cuda"):
args.use_gpu = True
else:
args.use_gpu = False
# MRI pre-processing configuration
args.output_suffix = '_brain_final.nii.gz'

preprocess_sh = os.path.join(cwd, 'preprocess.sh')
subprocess.check_call([preprocess_sh, args.id, args.t1_fname, args.t2_fname, args.dir, bool2str(args.preprocess), bool2str(args.use_gpu)])

args.t1 = os.path.join(args.outdir, args.id + '_t1' + args.output_suffix)
args.t2 = os.path.join(args.outdir, args.id + '_t2' + args.output_suffix)
args.output_suffix = "_brain_final.nii.gz"

preprocess_sh = os.path.join(cwd, "preprocess.sh")
subprocess.check_call(
[
preprocess_sh,
args.id,
args.t1_fname,
args.t2_fname,
args.dir,
bool2str(args.preprocess),
bool2str(args.use_gpu),
]
)

args.t1 = os.path.join(args.outdir, args.id + "_t1" + args.output_suffix)
args.t2 = os.path.join(args.outdir, args.id + "_t2" + args.output_suffix)
else:
logging.info('Skipping image preprocessing and brain masking, presumably images are co-registered, bias-corrected, and skull-stripped')
logging.info(
"Skipping image preprocessing and brain masking, presumably images are co-registered, bias-corrected, and skull-stripped"
)

if os.environ.get("CI_TESTING") is not None:
options["CI_TESTING_GT"] = os.environ.get("CI_TESTING_GT")
print("CI environment initialized: {}".format(options["CI_TESTING_GT"]))
mask = ants.image_read(options["CI_TESTING_GT"])
t1, t2 = ants.image_read(args.t1), ants.image_read(args.t2)
ants.mask_image(t1, mask, level=1, binarize=False).to_filename(args.t1)
ants.mask_image(t2, mask, level=1, binarize=False).to_filename(args.t2)

# deepFCD configuration
K.set_image_dim_ordering('th')
K.set_image_data_format('channels_first') # TH dimension ordering in this code
K.set_image_dim_ordering("th")
K.set_image_data_format("channels_first") # TH dimension ordering in this code

options['parallel_gpu'] = False
modalities = ['T1', 'FLAIR']
x_names = options['x_names']
options["parallel_gpu"] = False
modalities = ["T1", "FLAIR"]
x_names = options["x_names"]

# seed = options['seed']
options['dropout_mc'] = True
options['batch_size'] = 350000
options['mini_batch_size'] = 2048
options['load_checkpoint_1'] = True
options['load_checkpoint_2'] = True
options["dropout_mc"] = True
options["batch_size"] = 350000
options["mini_batch_size"] = 2048
options["load_checkpoint_1"] = True
options["load_checkpoint_2"] = True

# trained model weights based on 148 histologically-verified FCD subjects
options['test_folder'] = args.dir
options['weight_paths'] = os.path.join(cwd, 'weights')
options['experiment'] = 'noel_deepFCD_dropoutMC'
logging.info("experiment: {}".format(options['experiment']))
spt.setproctitle(options['experiment'])
options["test_folder"] = args.dir
options["weight_paths"] = os.path.join(cwd, "weights")
options["experiment"] = "noel_deepFCD_dropoutMC"
logging.info("experiment: {}".format(options["experiment"]))
spt.setproctitle(options["experiment"])

# --------------------------------------------------
# initialize the CNN
@@ -110,12 +147,24 @@
# initialize the CNN architecture
model = off_the_shelf_model(options)

load_weights = os.path.join(options['weight_paths'], 'noel_deepFCD_dropoutMC_model_1.h5')
logging.info("loading DNN1, model[0]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[0]: {} doesn't exist".format(load_weights))
load_weights = os.path.join(
options["weight_paths"], "noel_deepFCD_dropoutMC_model_1.h5"
)
logging.info(
"loading DNN1, model[0]: {} exists".format(load_weights)
) if os.path.isfile(load_weights) else sys.exit(
"model[0]: {} doesn't exist".format(load_weights)
)
model[0] = load_model(load_weights)

load_weights = os.path.join(options['weight_paths'], 'noel_deepFCD_dropoutMC_model_2.h5')
logging.info("loading DNN2, model[1]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[1]: {} doesn't exist".format(load_weights))
load_weights = os.path.join(
options["weight_paths"], "noel_deepFCD_dropoutMC_model_2.h5"
)
logging.info(
"loading DNN2, model[1]: {} exists".format(load_weights)
) if os.path.isfile(load_weights) else sys.exit(
"model[1]: {} doesn't exist".format(load_weights)
)
model[1] = load_model(load_weights)
logging.info(model[1].summary())

@@ -129,55 +178,77 @@
t1_file = args.t1
t2_file = args.t2

t1_transform = os.path.join(args.outdir, "transforms", args.id + "_t1-native-to-MNI152.mat")
t2_transform = os.path.join(args.outdir, "transforms", args.id + "_t2-native-to-MNI152.mat")
t1_transform = os.path.join(
args.outdir, "transforms", args.id + "_t1-native-to-MNI152.mat"
)
t2_transform = os.path.join(
args.outdir, "transforms", args.id + "_t2-native-to-MNI152.mat"
)

files = [args.t1, args.t2]

orig_files = {'T1':args.t1,'FLAIR':args.t2}
orig_files = {"T1": args.t1_orig, "FLAIR": args.t2_orig}

transform_files = [t1_transform, t2_transform]
# files = {}
# files['T1'], files['FLAIR'] = str(t1_file), t2_file
test_data = {}
# test_data = {f: {m: os.path.join(tfolder, f, m+'_stripped.nii.gz') for m in modalities} for f in test_list}
test_data = {f: {m: os.path.join(options['test_folder'], f, n) for m, n in zip(modalities, files)} for f in test_list}
test_tranforms = {f: {m: n for m, n in zip(modalities, transform_files)} for f in test_list}
test_data = {
f: {
m: os.path.join(options["test_folder"], f, n) for m, n in zip(modalities, files)
}
for f in test_list
}
test_transforms = {
f: {m: n for m, n in zip(modalities, transform_files)} for f in test_list
}
# test_data = {f: {m: os.path.join(options['test_folder'], f, n) for m, n in zip(modalities, files)} for f in test_list}

for _, scan in enumerate(tqdm(test_list, desc='serving predictions using the trained model', colour='blue')):
for _, scan in enumerate(
tqdm(test_list, desc="serving predictions using the trained model", colour="blue")
):
t_data = {}
t_data[scan] = test_data[scan]
transforms = {}
transforms[scan] = test_tranforms[scan]
transforms[scan] = test_transforms[scan]

options['pred_folder'] = os.path.join(options['test_folder'], scan, options['experiment'])
if not os.path.exists(options['pred_folder']):
os.mkdir(options['pred_folder'])
options["pred_folder"] = os.path.join(
options["test_folder"], scan, options["experiment"]
)
if not os.path.exists(options["pred_folder"]):
os.mkdir(options["pred_folder"])

pred_mean_fname = os.path.join(options['pred_folder'], scan + '_prob_mean_1.nii.gz')
pred_var_fname = os.path.join(options['pred_folder'], scan + '_prob_var_1.nii.gz')
pred_mean_fname = os.path.join(options["pred_folder"], scan + "_prob_mean_1.nii.gz")
pred_var_fname = os.path.join(options["pred_folder"], scan + "_prob_var_1.nii.gz")

if np.logical_and(os.path.isfile(pred_mean_fname), os.path.isfile(pred_var_fname)):
logging.info("prediction for {} already exists".format(scan))
continue

options['test_scan'] = scan
options["test_scan"] = scan

start = time.time()
logging.info('\n')
logging.info('-'*70)
logging.info("\n")
logging.info("-" * 70)
logging.info("testing the model for scan: {}".format(scan))
logging.info('-'*70)
logging.info("-" * 70)

# if transform(s) do not exist (i.e., no preprocessing done), then skip (see base.py#L412)
if not any([os.path.exists(transforms[scan]["T1"]), os.path.exists(transforms[scan]["FLAIR"])]):
transforms = None

# test0: prediction/stage1
# test1: pred/stage2
# test2: morphological processing + contiguous clusters
# pred0, pred1, postproc, _, _ = test_model(model, t_data, options)
test_model(model, t_data, options, transforms=transforms, orig_files=orig_files, invert_xfrm=True)
test_model(
model,
t_data,
options,
transforms=transforms,
orig_files=orig_files,
invert_xfrm=True,
)

end = time.time()
diff = (end - start) // 60
logging.info("-"*70)
logging.info("-" * 70)
logging.info("time elapsed: ~ {} minutes".format(diff))
logging.info("-"*70)
logging.info("-" * 70)
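For orientation, a minimal sketch of calling this script programmatically with the positional arguments it parses above; the patient ID and paths are illustrative placeholders, and the README diff below shows the equivalent shell one-liners.

```python
# Hypothetical invocation; mirrors the positional-argument layout parsed by app/inference.py.
import subprocess

subprocess.check_call([
    "./app/inference.py",
    "FCD_001",       # argv[1]: patient ID, used as the filename prefix
    "t1.nii.gz",     # argv[2]: T1 image, expected under <dir>/<id>/
    "flair.nii.gz",  # argv[3]: FLAIR image, expected under <dir>/<id>/
    "/io",           # argv[4]: I/O directory (args.dir)
    "cuda0",         # argv[5]: device flag (cpu, cuda, cuda0, cuda1)
    "1",             # argv[6]: 1 = run brain extraction (args.brain_masking)
    "1",             # argv[7]: 1 = co-register to MNI152 + N3 correction (args.preprocess)
])
```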
55 changes: 32 additions & 23 deletions app/models/model_builder.py
@@ -1,13 +1,15 @@
from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, Dropout, Flatten, Activation
from keras.layers.convolutional import MaxPooling3D, Conv3D
from keras import backend as K
from keras.layers import Activation, Conv3D, Dropout, Flatten, Input, MaxPooling3D
from keras.layers.convolutional import Conv3D, MaxPooling3D
from keras.layers.normalization import BatchNormalization
from keras.models import Model

# from keras.layers.advanced_activations import PReLU, LeakyReLU

from keras import backend as K
K.set_image_dim_ordering('th')
K.set_image_dim_ordering("th")
import warnings
warnings.filterwarnings('ignore')

warnings.filterwarnings("ignore")


def off_the_shelf(input, options):
@@ -17,55 +19,62 @@ def off_the_shelf(input, options):
channel_axis = -1

# base_filters = 48
base_filters = options['base_filters']
base_filters = options["base_filters"]

c1 = Conv3D(base_filters, (3, 3, 3), border_mode='same', activation=options['activation'])(input)
c1 = Conv3D(
base_filters, (3, 3, 3), border_mode="same", activation=options["activation"]
)(input)
# c1 = Conv3D(base_filters, (3, 3, 3), border_mode='same')(input)
# p1 = LeakyReLU()(c1)
b1 = BatchNormalization(axis=channel_axis)(c1)
if options['dropout_mc']:
b1 = Dropout(options['dropout_1'])(b1)
if options["dropout_mc"]:
b1 = Dropout(options["dropout_1"])(b1)
m1 = MaxPooling3D((2, 2, 2), strides=(2, 2, 2))(b1)

c2 = Conv3D(base_filters*2, (3, 3, 3), border_mode='same', activation=options['activation'])(m1)
c2 = Conv3D(
base_filters * 2,
(3, 3, 3),
border_mode="same",
activation=options["activation"],
)(m1)
# c2 = Conv3D(base_filters*2, (3, 3, 3), border_mode='same')(m1)
# p2 = LeakyReLU()(c2)
b2 = BatchNormalization(axis=channel_axis)(c2)
if options['dropout_mc']:
b2 = Dropout(options['dropout_2'])(b2)
if options["dropout_mc"]:
b2 = Dropout(options["dropout_2"])(b2)
m2 = MaxPooling3D((2, 2, 2), strides=(2, 2, 2))(b2)

return m2


def create_off_the_shelf(options):
'''
"""
Creates a custom off-the-shelf CNN
:param nb_classes: number of classes
:return: Keras Model with 1 input (patch_size) and 1 output
'''
nb_classes = options['nb_classes']
channels = options['channels']
shape = options['patch_size']
"""
nb_classes = options["nb_classes"]
channels = options["channels"]
shape = options["patch_size"]

if K.image_dim_ordering() == 'th':
if K.image_dim_ordering() == "th":
init = Input((channels, shape[0], shape[1], shape[2]))
else:
init = Input((shape[0], shape[1], shape[2], channels))

x = off_the_shelf(init, options)

# Dropout
x = Dropout(options['dropout_3'])(x)
x = Dropout(options["dropout_3"])(x)

x = Conv3D(nb_classes, (3, 3, 3), border_mode='same')(x)
x = Conv3D(nb_classes, (3, 3, 3), border_mode="same")(x)
x = MaxPooling3D((4, 4, 4))(x)

x = Flatten()(x)

# Output
out = Activation('softmax')(x)
out = Activation("softmax")(x)

model = Model(init, output=[out], name='off_the_shelf')
model = Model(init, output=[out], name="off_the_shelf")

return model
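To make the expected configuration concrete, here is a hedged sketch of the `options` keys these builders read; the values below are placeholder assumptions, and the real configuration comes from `config.experiment` (imported by `app/inference.py` and `app/train.py`).

```python
# Placeholder configuration (assumed values) covering the keys read by
# off_the_shelf()/create_off_the_shelf(); real values come from config.experiment.
from models.model_builder import create_off_the_shelf

options = {
    "base_filters": 48,          # matches the commented-out default above
    "activation": "relu",        # assumption: any Keras activation name
    "dropout_mc": True,          # keep Dropout layers active for MC sampling
    "dropout_1": 0.25,           # assumed rates, not the trained values
    "dropout_2": 0.25,
    "dropout_3": 0.5,
    "nb_classes": 2,             # softmax over lesion vs. non-lesion (assumption)
    "channels": 2,               # one channel per modality: T1 + FLAIR
    "patch_size": (16, 16, 16),  # 3D patch shape, as in create_hdf5_patch_dataset.py below
}

model = create_off_the_shelf(options)  # uncompiled Keras Model
model.summary()
```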
19 changes: 12 additions & 7 deletions app/models/noel_models_keras.py
@@ -1,7 +1,8 @@
from models.model_builder import *
from keras.utils import multi_gpu_model
from keras.optimizers import Adadelta
from keras import losses
from keras.optimizers import Adadelta
from keras.utils import multi_gpu_model

from models.model_builder import *


def off_the_shelf_model(options):
@@ -19,16 +20,20 @@ def off_the_shelf_model(options):
# first model
# --------------------------------------------------
model_1 = create_off_the_shelf(options)
if options['parallel_gpu']:
if options["parallel_gpu"]:
model_1 = multi_gpu_model(model_1, gpus=2)
model_1.compile(optimizer=Adadelta(), loss=losses.binary_crossentropy, metrics=['accuracy'])
model_1.compile(
optimizer=Adadelta(), loss=losses.binary_crossentropy, metrics=["accuracy"]
)

# --------------------------------------------------
# second model
# --------------------------------------------------
model_2 = create_off_the_shelf(options)
if options['parallel_gpu']:
if options["parallel_gpu"]:
model_2 = multi_gpu_model(model_2, gpus=2)
model_2.compile(optimizer=Adadelta(), loss=losses.binary_crossentropy, metrics=['accuracy'])
model_2.compile(
optimizer=Adadelta(), loss=losses.binary_crossentropy, metrics=["accuracy"]
)

return [model_1, model_2]
75 changes: 59 additions & 16 deletions app/preprocess.py
@@ -1,25 +1,51 @@
import os
from mo_dots import to_data
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser

import psutil
import torch
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
from mo_dots import to_data

import deepMask.app.vnet as vnet
from deepMask.app.utils.data import *
from deepMask.app.utils.deepmask import *
from deepMask.app.utils.image_processing import noelImageProcessor
import deepMask.app.vnet as vnet

# configuration
# parse command line arguments
parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--id", dest='id', default="FCD_123", help="Alphanumeric patient code")
parser.add_argument("-t1", "--t1_fname", dest='t1_fname', default="t1.nii.gz", help="T1-weighted image")
parser.add_argument("-t2", "--t2_fname", dest='t2_fname', default="t2.nii.gz", help="T2-weighted image")
parser.add_argument("-d", "--dir", dest='dir', default="data/", help="Directory containing the input images")
parser.add_argument(
"-i", "--id", dest="id", default="FCD_123", help="Alphanumeric patient code"
)
parser.add_argument(
"-t1", "--t1_fname", dest="t1_fname", default="t1.nii.gz", help="T1-weighted image"
)
parser.add_argument(
"-t2", "--t2_fname", dest="t2_fname", default="t2.nii.gz", help="T2-weighted image"
)
parser.add_argument(
"-d",
"--dir",
dest="dir",
default="data/",
help="Directory containing the input images",
)

parser.add_argument("-p", "--preprocess", dest='preprocess', action='store_true', help="Co-register and perform non-uniformity correction of input images")
parser.add_argument("-g", "--use_gpu", dest='use_gpu', action='store_true', help="Compute using GPU, defaults to using CPU")
parser.add_argument(
"-p",
"--preprocess",
dest="preprocess",
action="store_true",
help="Co-register and perform non-uniformity correction of input images",
)
parser.add_argument(
"-g",
"--use_gpu",
dest="use_gpu",
action="store_true",
help="Compute using GPU, defaults to using CPU",
)
args = to_data(vars(parser.parse_args()))

# set up parameters
args.outdir = os.path.join(args.dir, args.id)
args.tmpdir = os.path.join(args.outdir, "tmp")
@@ -33,14 +59,18 @@

# trained weights based on manually corrected masks from
# 153 patients with cortical malformations
args.inference = os.path.join(cwd, 'deepMask/app/weights', 'vnet_masker_model_best.pth.tar')
args.inference = os.path.join(
cwd, "deepMask/app/weights", "vnet_masker_model_best.pth.tar"
)
# resize all input images to this resolution matching training data
args.resize = (160,160,160)
args.resize = (160, 160, 160)
args.cuda = torch.cuda.is_available() and args.use_gpu
torch.manual_seed(args.seed)
args.device_ids = list(range(torch.cuda.device_count()))

mem_size = psutil.virtual_memory().available // (1024*1024*1024) # available RAM in GB
mem_size = psutil.virtual_memory().available // (
1024 * 1024 * 1024
) # available RAM in GB
# mem_size = 32
if mem_size < 64 and not args.use_gpu:
os.environ["BRAIN_MASKING"] = "cpu"
@@ -54,9 +84,22 @@
print("build vnet, using CPU")
model = vnet.build_model(args)

template = os.path.join(cwd, 'deepMask/app/template', 'mni_icbm152_t1_tal_nlin_sym_09a.nii.gz')
template = os.path.join(
cwd, "deepMask/app/template", "mni_icbm152_t1_tal_nlin_sym_09a.nii.gz"
)

# MRI pre-processing configuration
args.output_suffix = '_brain_final.nii.gz'
args.output_suffix = "_brain_final.nii.gz"

noelImageProcessor(id=args.id, t1=args.t1, t2=args.t2, output_suffix=args.output_suffix, output_dir=args.outdir, template=template, usen3=True, args=args, model=model, preprocess=args.preprocess).pipeline()
noelImageProcessor(
id=args.id,
t1=args.t1,
t2=args.t2,
output_suffix=args.output_suffix,
output_dir=args.outdir,
template=template,
usen3=True,
args=args,
model=model,
preprocess=args.preprocess,
).pipeline()
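A minimal sketch of invoking this standalone preprocessing entry point with the argparse flags defined above; the patient code and directory are illustrative, and in the full pipeline this step is normally launched through `app/preprocess.sh`, which activates the `preprocess` conda environment.

```python
# Hypothetical direct invocation of app/preprocess.py using the flags parsed above.
import subprocess

subprocess.check_call([
    "python3", "app/preprocess.py",
    "-i", "FCD_001",         # alphanumeric patient code
    "-t1", "t1.nii.gz",      # T1-weighted image
    "-t2", "flair.nii.gz",   # T2/FLAIR image
    "-d", "/io",             # directory containing the input images
    "-p",                    # co-register and N3-correct before brain extraction
    "-g",                    # run the deepMask V-Net on the GPU, if available
])
```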
1 change: 1 addition & 0 deletions app/preprocess.sh
@@ -25,6 +25,7 @@ OUTDIR=${BASEDIR}/${ID}/ # args.outdir = os.path.join(args.dir, args.id)

PWD=$(dirname "$0")

conda=${CONDA_EXE}
eval "$(conda shell.bash hook)"
conda activate preprocess
# echo $CONDA_PREFIX
7 changes: 5 additions & 2 deletions app/requirements.txt
@@ -1,4 +1,5 @@
antspyx==0.3.2
antspyx==0.4.2 --only-binary=antspyx
git+https://github.com/ravnoor/atlasreader@master#egg=atlasreader
Theano==1.0.4
keras==2.2.4
h5py==2.10.0
@@ -8,9 +9,11 @@ nibabel==3.2.2
nilearn==0.9.1
numpy==1.21.6
pandas==1.3.5
psutil==5.9.2
scikit-image==0.19.2
scikit-learn==1.0.2
scipy==1.7.3
setproctitle==1.2.3
tqdm==4.62.3
tabulate==0.9.0
tqdm==4.65.0
xlrd==2.0.1
Binary file added app/templates/subcortical_mask_v3.nii.gz
Binary file not shown.
16 changes: 10 additions & 6 deletions app/train.py
@@ -1,7 +1,12 @@
#!/usr/bin/env python3

import os, sys, socket, time, json
import json
import multiprocessing
import os
import socket
import sys
import time

from config.experiment import options

hostname = socket.getfqdn()
@@ -27,16 +32,15 @@
print(os.environ["THEANO_FLAGS"])

import numpy as np
from nibabel import load as load_nii
import pandas as pd
import setproctitle as spt
from keras import backend as K
from nibabel import load as load_nii
from tqdm import tqdm

from models.noel_models_keras import *
from keras import backend as K

from utils.metrics import *
from utils.base import *
from utils.metrics import *

K.set_image_dim_ordering("th")
K.set_image_data_format("channels_first") # TH dimension ordering in this code
@@ -250,4 +254,4 @@
diff = end - start
print("=" * 80)
print("time elapsed: ~ {} seconds".format(diff))
print("=" * 80)
print("=" * 80)
596 changes: 446 additions & 150 deletions app/utils/base.py

Large diffs are not rendered by default.

105 changes: 74 additions & 31 deletions app/utils/bayes_uncertainty_utils.py
@@ -1,13 +1,25 @@
import os

import nibabel as nib
import numpy as np
from tqdm import trange
from pynvml import *
from nibabel import load as load_nii
import nibabel as nib
from pynvml import *
from tqdm import trange

from utils.patch_dataloader import *

def test_scan_uncertainty(model, test_x_data, scan, options, intermediate=None, save_nifti=False, uncertainty=True, candidate_mask=None, T=20):

def test_scan_uncertainty(
model,
test_x_data,
scan,
options,
intermediate=None,
save_nifti=False,
uncertainty=True,
candidate_mask=None,
T=20,
):
"""
Test data based on one model
Input:
@@ -23,17 +35,17 @@ def test_scan_uncertainty(model, test_x_data, scan, options, intermediate=None,
nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(handle)
bsize = info.total/1024/1024
bsize = info.total / 1024 / 1024
# print "total GPU memory available: %d MB" % (bsize)
if bsize < 2000:
batch_size = 384
print("reducing batch_size to : {}".format(batch_size))
options['batch_size'] = 100352
options["batch_size"] = 100352
else:
if options['hostname'].startswith("hamlet"):
if options["hostname"].startswith("hamlet"):
# batch_size = 2200
batch_size = 3000
options['batch_size'] = 350000
options["batch_size"] = 350000
else:
# batch_size = 2800
batch_size = 2000
@@ -43,7 +55,7 @@ def test_scan_uncertainty(model, test_x_data, scan, options, intermediate=None,
tmp[scan] = test_x_data
test_x_data = tmp
scans = test_x_data.keys()
flair_scans = [test_x_data[s]['FLAIR'] for s in scans]
flair_scans = [test_x_data[s]["FLAIR"] for s in scans]
flair_image = load_nii(flair_scans[0]).get_data()
header = load_nii(flair_scans[0]).header
# affine = header.get_qform()
@@ -53,19 +65,31 @@ def test_scan_uncertainty(model, test_x_data, scan, options, intermediate=None,

# get test paths
_, scan = os.path.split(flair_scans[0])
test_folder = os.path.join('/host/silius/local_raid/ravnoor/01_Projects/55_Bayesian_DeepLesion_LoSo/data/predictions', options['experiment'])
test_folder = os.path.join(
"/host/silius/local_raid/ravnoor/01_Projects/55_Bayesian_DeepLesion_LoSo/data/predictions",
options["experiment"],
)
# test_folder = '/host/silius/local_raid/ravnoor/01_Projects/06_DeepLesion_LoSo/data/predictions
if not os.path.exists(test_folder):
# os.path.join(test_folder, options['experiment'])
os.mkdir(test_folder)

print('-'*60)
print(str.replace(scan, '_flair.nii.gz', ''))
print('-'*60)
print("-" * 60)
print(str.replace(scan, "_flair.nii.gz", ""))
print("-" * 60)
# compute lesion segmentation in batches of size options['batch_size']
for batch, centers in load_test_patches(test_x_data, options, options['patch_size'], options['batch_size'], options['min_th'], candidate_mask):
for batch, centers in load_test_patches(
test_x_data,
options,
options["patch_size"],
options["batch_size"],
options["min_th"],
candidate_mask,
):
print("predicting uncertainty")
y_pred, y_pred_var = predict_uncertainty(model, batch, batch_size=batch_size, T=T)
y_pred, y_pred_var = predict_uncertainty(
model, batch, batch_size=batch_size, T=T
)
[x, y, z] = np.stack(centers, axis=1)
seg_image[x, y, z] = y_pred[:, 1]
var_image[x, y, z] = y_pred_var[:, 1]
@@ -76,20 +100,20 @@ def test_scan_uncertainty(model, test_x_data, scan, options, intermediate=None,
os.mkdir(test_folder)
# out_scan = nib.Nifti1Image(seg_image, np.eye(4))
out_scan = nib.Nifti1Image(seg_image, header=header)
test_name = str.replace(scan, '_flair.nii.gz', '') + '_out_pred_mean_0.nii.gz'
test_name = str.replace(scan, "_flair.nii.gz", "") + "_out_pred_mean_0.nii.gz"
out_scan.to_filename(os.path.join(test_folder, test_name))

out_scan = nib.Nifti1Image(var_image, header=header)
test_name = str.replace(scan, '_flair.nii.gz', '') + '_out_pred_var_0.nii.gz'
test_name = str.replace(scan, "_flair.nii.gz", "") + "_out_pred_var_0.nii.gz"
out_scan.to_filename(os.path.join(test_folder, test_name))

# test_folder = str.replace(test_folder, 'brain', 'predictions')
if not os.path.exists(os.path.join(test_folder, options['experiment'])):
os.mkdir(os.path.join(test_folder, options['experiment']))
if not os.path.exists(os.path.join(test_folder, options["experiment"])):
os.mkdir(os.path.join(test_folder, options["experiment"]))

out_scan = nib.Nifti1Image(seg_image, header=header)
#out_scan.to_filename(os.path.join(options['test_folder'], options['test_scan'], options['experiment'], options['test_name']))
test_name = str.replace(scan, '_flair.nii.gz', '') + '_out_pred_0.nii.gz'
# out_scan.to_filename(os.path.join(options['test_folder'], options['test_scan'], options['experiment'], options['test_name']))
test_name = str.replace(scan, "_flair.nii.gz", "") + "_out_pred_0.nii.gz"
out_scan.to_filename(os.path.join(test_folder, test_name))

thresh_image = seg_image.copy()
@@ -101,35 +125,54 @@ def select_voxels_from_previous_model(model, train_x_data, options):
"""
Select training voxels from image segmentation masks
"""
threshold = options['th_dnn_train_2']
threshold = options["th_dnn_train_2"]
# get_scan names
scans = list(train_x_data.keys())
# print(scans)
print(dict(train_x_data[scans[0]]))

# mask = [test_scan_uncertainty(model, dict(train_x_data.items()[s:s+1]), options, intermediate=1, uncertainty=True)[0] > threshold for s in trange(len(scans), desc='sel_vox_prev_model_pred_mean')]
mask = [test_scan_uncertainty(model, dict(train_x_data[scans[s]]), scans[s], options, intermediate=1, uncertainty=True)[0] > threshold for s in trange(len(scans), desc='sel_vox_prev_model_pred_mean')]
mask = [
test_scan_uncertainty(
model,
dict(train_x_data[scans[s]]),
scans[s],
options,
intermediate=1,
uncertainty=True,
)[0]
> threshold
for s in trange(len(scans), desc="sel_vox_prev_model_pred_mean")
]

return mask


def predict_uncertainty(model, data, batch_size, T=10):
input = model.layers[0].input
output = model.layers[-1].output
f_stochastic = K.function([input, K.learning_phase()], output) # instantiates a Keras function.
K.set_image_dim_ordering('th')
K.set_image_data_format('channels_first')

Yt_hat = np.array([predict_stochastic(f_stochastic, data, batch_size=batch_size) for _ in tqdm(xrange(T), ascii=True, desc="predict_stochastic")])
f_stochastic = K.function(
[input, K.learning_phase()], output
) # instantiates a Keras function.
K.set_image_dim_ordering("th")
K.set_image_data_format("channels_first")

Yt_hat = np.array(
[
predict_stochastic(f_stochastic, data, batch_size=batch_size)
for _ in tqdm(xrange(T), ascii=True, desc="predict_stochastic")
]
)
MC_pred = np.mean(Yt_hat, 0)
MC_pred_var = np.var(Yt_hat, 0)

return MC_pred, MC_pred_var


def predict_stochastic(f, ins, batch_size=128, verbose=0):
'''
Abstract method to loop over some data in batches.
'''
"""
Abstract method to loop over some data in batches.
"""
nb_sample = len(ins)
outs = []
if verbose == 1:
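`predict_uncertainty()` above implements the MC-dropout estimate: a stochastic Keras function keeps dropout active, the forward pass is repeated `T` times, and the stacked predictions are reduced to a mean (used as the probability map) and a variance (used as the uncertainty map). A standalone numpy toy with synthetic numbers, only to illustrate that reduction:

```python
# Toy illustration (synthetic data, not repo code) of the MC-dropout reduction used above.
import numpy as np

rng = np.random.default_rng(0)
T, n_voxels, n_classes = 20, 5, 2  # T=20 mirrors the default in test_scan_uncertainty()
# stand-in for T dropout-enabled forward passes over a small batch of candidate voxels
Yt_hat = rng.dirichlet(np.ones(n_classes), size=(T, n_voxels))

MC_pred = np.mean(Yt_hat, 0)      # predictive mean, as in predict_uncertainty()
MC_pred_var = np.var(Yt_hat, 0)   # predictive variance, i.e. the uncertainty estimate
print(MC_pred[:, 1], MC_pred_var[:, 1])  # lesion-class mean probability and variance per voxel
```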
145 changes: 145 additions & 0 deletions app/utils/confidence.py
@@ -0,0 +1,145 @@
import os

import nibabel as nib
import numpy as np
import pandas as pd
from nibabel import load as load_nii
from nilearn.plotting import find_parcellation_cut_coords
from scipy import ndimage as nd
from sklearn.preprocessing import minmax_scale


def find_center_xyz(labels_np, header):
labels_img = nib.Nifti1Image(labels_np, affine=header.get_qform(), header=header)
coords = find_parcellation_cut_coords(
labels_img, background_label=0, return_label_names=False
)
new_coords = [np.round(coord, 2) for coord in coords[0]]
return new_coords


def get_rank_array(unsorted_array):
order = unsorted_array.argsort()
ranks = order.argsort()
return ranks


def assign_rank_io(
pid,
lesion_fp_bin,
data_prob,
data_var,
header,
options,
pred_labels,
label_list,
):

struct = nd.generate_binary_structure(3, 3)
pred_labels, _ = nd.label(lesion_fp_bin, structure=struct)
label_list = np.unique(pred_labels) # drop the background label

num_elements_by_lesion = nd.labeled_comprehension(
lesion_fp_bin, pred_labels, label_list, np.sum, float, 0
)

tmp = np.zeros_like(lesion_fp_bin)
uncert, prob, coords = [], [], []
# print(num_elements_by_lesion, type(num_elements_by_lesion))
for l in range(len(num_elements_by_lesion)):
if num_elements_by_lesion[l] > options["l_min"]:
# assign voxels to output
current_voxels = np.stack(np.where(pred_labels == l), axis=1)
tmp[current_voxels[:, 0], current_voxels[:, 1], current_voxels[:, 2]] = 1
coord = find_center_xyz(tmp, header)
# print(coord)
coords.append(coord)
uncert.append(np.median(np.ma.masked_equal(tmp * data_var, 0).compressed()))
prob.append(np.median(np.ma.masked_equal(tmp * data_prob, 0).compressed()))
tmp = np.zeros_like(lesion_fp_bin)

# print(1/np.array(uncert).ravel())
conf = 100 * minmax_scale(1 / np.array(uncert).ravel())
conf_sort = 1 + get_rank_array(-conf) # reverse arg_sort() and offset zero rank
# conf_sort
# print(conf_sort)
output_scan = np.zeros_like(lesion_fp_bin)
# les_lab = []
for l in range(1, len(num_elements_by_lesion)):
if num_elements_by_lesion[l] > options["l_min"]:
# assign voxels to output
current_voxels = np.stack(np.where(pred_labels == l), axis=1)
output_scan[
current_voxels[:, 0], current_voxels[:, 1], current_voxels[:, 2]
] = conf_sort[l - 1]

out_img = nib.Nifti1Image(output_scan, affine=header.get_qform(), header=header)
fname = os.path.join(options["data_folder"], str(pid) + "_ranked_image.nii.gz")
nib.save(out_img, fname)
return np.array(prob).ravel(), np.array(conf).ravel(), conf_sort, uncert, coords


def extractLesionCluster(scan, ea, ea_var, options):
ea_orig, ea_var_orig = ea.copy(), ea_var.copy()

submask = options['submask']
if os.path.exists(submask):
submask = load_nii(submask).get_fdata()
else:
submask = np.ones_like(ea)

ea = nd.grey_closing(ea, size=(3, 3, 3))
ea_var = nd.grey_closing(ea_var, size=(3, 3, 3))
ea = ea > options["t_bin"]
output_scan = ea.copy()
# ea = ea*submask

morphed = nd.binary_opening(output_scan, iterations=1)
morphed = nd.binary_fill_holes(morphed, structure=np.ones((5, 5, 5))).astype(int)

morphed = morphed * submask
pred_labels, _ = nd.label(morphed, structure=np.ones((3, 3, 3)))

label_list = np.unique(pred_labels)
num_elements_by_lesion = nd.labeled_comprehension(
morphed, pred_labels, label_list, np.sum, float, 0
)

output_scan = np.zeros_like(morphed)
for l in range(len(num_elements_by_lesion)):
if num_elements_by_lesion[l] > options["l_min"]:
# assign voxels to output
current_voxels = np.stack(np.where(pred_labels == l), axis=1)
output_scan[
current_voxels[:, 0], current_voxels[:, 1], current_voxels[:, 2]
] = 1

pred_labels, _ = nd.label(output_scan, structure=np.ones((3, 3, 3)))
label_list = np.unique(pred_labels)
num_elements_by_lesion = nd.labeled_comprehension(
output_scan, pred_labels, label_list, np.sum, float, 0
)

# assign voxel-level cluster-wise ranks based on confidence
prob, conf, conf_sort, uncert, coords = assign_rank_io(
scan,
output_scan,
ea_orig,
ea_var_orig,
options["header"],
options,
pred_labels,
label_list,
)

stats = {
"probability": prob,
"confidence": conf / 100,
"var": uncert,
"rank": conf_sort,
"id": str(scan),
"coords": coords,
}

stat_df = pd.DataFrame.from_records(stats)
return pred_labels, stat_df
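A minimal usage sketch for extractLesionCluster, mirroring how reporting.py calls it; the module name, file paths, and thresholds below are illustrative assumptions for this sketch, not fixed defaults.

from nibabel import load as load_nii
from confidence import extractLesionCluster  # module name as used by reporting.py

options = {
    "submask": "templates/subcortical_mask_v3.nii.gz",  # subcortical exclusion mask
    "t_bin": 0.6,        # probability threshold
    "l_min": 150,        # minimum cluster size in voxels
    "data_folder": "/tmp/example_case",  # where the ranked NIfTI is written
}
mean_nii = load_nii("/tmp/example_case/example_prob_mean_1.nii.gz")  # illustrative path
var_nii = load_nii("/tmp/example_case/example_prob_var_1.nii.gz")    # illustrative path
options["header"] = mean_nii.header

pred_labels, stats = extractLesionCluster(
    "example", mean_nii.get_fdata(), var_nii.get_fdata(), options
)
print(stats.sort_values("rank"))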
111 changes: 75 additions & 36 deletions app/utils/create_hdf5_patch_dataset.py
100644 → 100755
@@ -1,57 +1,57 @@
#!/usr/bin/env python
#!/usr/bin/env python3

import os
try:
import h5py
except ImportError:
raise ImportError('install h5py first: `pip install h5py --upgrade`')

import numpy as np

os.environ["KERAS_BACKEND"] = "theano"
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cpu,floatX=float32"
print(os.environ["THEANO_FLAGS"])

import time
import string
from keras import backend as K
# from keras.utils import to_categorical

from utils.metrics import *
# from utils.base import *
from utils.h5data import *
# import utils.h5data as h5d
from h5data import load_training_data, create_dataset

# set configuration parameters
options = {}
options['n_patches'] = 1000
options['seed'] = 666
options['modalities'] = ['T1', 'FLAIR']
options['x_names'] = ['_t1.nii.gz', '_flair.nii.gz']
options['y_names'] = ['_lesion.nii.gz']
options['submask_names'] = ['subcorticalMask_final_negative.nii.gz']
options['patch_size'] = (18,18,18)
options["n_patches"] = 1500
options["seed"] = 666
options["modalities"] = ["T1", "FLAIR"]
options["x_names"] = ["_t1.nii.gz", "_flair.nii.gz"]
options["y_names"] = ["_lesion.nii.gz"]
options["submask_names"] = ["subcorticalMask_final_negative.nii.gz"]
options["patch_size"] = (16, 16, 16)

options['thr'] = 0.1
options['min_th'] = options['thr']
options["thr"] = 0.1
options["min_th"] = options["thr"]

# randomize training features before fitting the model.
options['randomize_train'] = True
options["randomize_train"] = True

modalities = options['modalities']
x_names = options['x_names']
y_names = options['y_names']
modalities = options["modalities"]
x_names = options["x_names"]
y_names = options["y_names"]

seed = options['seed']
seed = options["seed"]
print("seed: {}".format(seed))
# Select an experiment name to store net weights and segmentation masks
options['experiment'] = 'noel_FCDdata_'
options["experiment"] = "noel_FCDdata"

options['model_dir'] = './weights' # weights/noel_dropoutMC_model_{1,2}.h5
options['train_folder'] = '/host/hamlet/local_raid/data/ravnoor/01_Projects/55_Bayesian_DeepLesion_LoSo/data/'
options['data_folder'] = '/host/hamlet/local_raid/data/ravnoorX/data/noel_hdf5'
options["model_dir"] = "./weights" # weights/noel_dropoutMC_model_{1,2}.h5
options["train_folder"] = "/host/hamlet/local_raid/data/ravnoor/01_Projects/55_Bayesian_DeepLesion_LoSo/data/"
# options["data_folder"] = "/host/hamlet/local_raid/data/ravnoorX/data/noel_hdf5"
options["data_folder"] = "/tmp/noel_hdf5"

list_of_train_scans = os.listdir(options['train_folder']+'brain')
list_of_train_scans = os.listdir(options["train_folder"] + "brain")
include_train = list(set(list_of_train_scans))

modality = [x.lower() for x in modalities]

for m in modality:
include_train = [f.replace('_'+m+'.nii.gz', '') for f in include_train]
include_train = [f.replace("_" + m + ".nii.gz", "") for f in include_train]
include_train = list(set(include_train))

print("training dataset size: {}".format(len(include_train)))
@@ -60,24 +60,63 @@

for scan in include_train:
# load paths to all the data
train_x_data = {f: {m: os.path.join(options['train_folder'], 'brain', f+n) for m, n in zip(modality, x_names)} for f in include_train}
train_y_data = {f: os.path.join(options['train_folder'], 'lesion_labels', f+y_names[0]) for f in include_train}

print("\nconverting 3D MRI to patch-based dataset with {} patches of size: {}".format(options['n_patches'], options['patch_size']))
train_x_data = {
f: {
m: os.path.join(options["train_folder"], "brain", f + n)
for m, n in zip(modality, x_names)
}
for f in include_train
}
train_y_data = {
f: os.path.join(options["train_folder"], "lesion_labels", f + y_names[0])
for f in include_train
}

print(
"\nconverting 3D MRI to patch-based dataset with {} patches of size: {}".format(
options["n_patches"], options["patch_size"]
)
)

start = time.time()

X, y = load_training_data(train_x_data, train_y_data, options=options, subcort_masks=None)
X, y = load_training_data(
train_x_data, train_y_data, options=options, subcort_masks=None
)
# y = to_categorical(Y, num_classes=2)

print("\ndata_shape: {}, {}".format(X.shape, y.shape))

h5_fname = options['experiment'] + '_N_patches_' + str(options['n_patches']) + '_patchsize_' + str(options['patch_size'][0]) + '_iso.h5'
datapath = os.path.join(options['data_folder'], h5_fname)
h5_fname = (
options["experiment"]
+ "_N_patches_"
+ str(options["n_patches"])
+ "_patchsize_"
+ str(options["patch_size"][0])
+ "_iso_fix.h5"
)

print(np.histogram(y, bins=2))

datapath = os.path.join(options["data_folder"], h5_fname)
print("\nhdf5 dataset is being created: {}".format(datapath))

create_dataset(datapath, X, y)

end = time.time()
diff = end - start
print("time elapsed: ~ {} minutes".format(diff // 60))

# validate the newly created dataset
print("\nhdf5 dataset is being loaded: {}".format(datapath))

# sample hdf5 dataset available from https://doi.org/10.5281/zenodo.3239446
with h5py.File(datapath, "r") as f:
X = f['data'][:].astype('f')
y = f['labels'][:].astype('i')

# output the shape of the patches and labels
print(X.shape, y.shape)

# should output equal number of positive and negative examples (0/1)
print(np.histogram(y, bins=2))
238 changes: 180 additions & 58 deletions app/utils/h5data.py
@@ -1,36 +1,62 @@
import numpy as np
from operator import add, itemgetter

import h5py
import numpy as np
from nibabel import load as load_nii
from scipy.ndimage import binary_dilation
from tqdm import tqdm
from tqdm.contrib import tzip
from scipy.ndimage import binary_dilation
from nibabel import load as load_nii
from operator import itemgetter, add
from .patch_dataloader import binarize_label_gm, select_voxels_from_previous_model

from patch_dataloader import (binarize_label_gm,
select_voxels_from_previous_model)


def create_dataset(data_path, X, y):
"""
"""
Write the training patches X and labels y to a compressed HDF5 dataset
Inputs:
- X: training X data matrix for the particular channel [num_samples, p1, p2, p3]
- y: training y labels [num_samples,]
Outputs:
- data_path: compressed HDF5 dataset with X and y
"""
with h5py.File(data_path, 'w') as f:
with h5py.File(data_path, "w") as f:
# f = h5py.File(data_path, 'w')
# Creating dataset to store features
X_dset = f.create_dataset('data', X.shape, dtype='f', compression="gzip", compression_opts=9, shuffle=True)
X_dset = f.create_dataset(
"data",
X.shape,
dtype="f",
compression="gzip",
compression_opts=9,
shuffle=True,
)
X_dset[:] = X
# Creating dataset to store labels
y_dset = f.create_dataset('labels', y.shape, dtype='i', compression="gzip", compression_opts=9, shuffle=True)
y_dset = f.create_dataset(
"labels",
y.shape,
dtype="i8",
compression="gzip",
compression_opts=9,
shuffle=True,
)
y_dset[:] = y
# f.close()
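# hedged usage sketch for create_dataset (synthetic arrays; the path is illustrative):
# >>> X = np.random.rand(8, 2, 16, 16, 16).astype("f")
# >>> y = np.random.randint(0, 2, size=8)
# >>> create_dataset("/tmp/example_patches.h5", X, y)
# >>> with h5py.File("/tmp/example_patches.h5", "r") as f:
# ...     print(f["data"].shape, f["labels"].shape)
# (8, 2, 16, 16, 16) (8,)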


def load_train_patches(x_data, y_data, selected_voxels, patch_size, subcort_masks=None, n_patches=1000, seed=666, datatype=np.float32):
def load_train_patches(
x_data,
y_data,
selected_voxels,
patch_size,
subcort_masks=None,
n_patches=1000,
seed=666,
datatype=np.float32,
):
"""
Load train patches with size equal to patch_size, given a list of selected voxels
@@ -46,50 +72,122 @@ def load_train_patches(x_data, y_data, selected_voxels, patch_size, subcort_mask
"""

# load images and normalize their intensities
images = [load_nii(name).get_data() for name in tqdm(x_data, desc="loading MRI images")]
images_norm = [(im.astype(dtype=datatype) - im[np.nonzero(im)].mean()) / im[np.nonzero(im)].std() for im in tqdm(images, desc="normalize MRI intensities")]
images = [
load_nii(name).get_fdata() for name in tqdm(x_data, desc="loading MRI images")
]
images_norm = [
(im.astype(dtype=datatype) - im[np.nonzero(im)].mean())
/ im[np.nonzero(im)].std()
for im in tqdm(images, desc="normalize MRI intensities")
]
del images

# load labels
lesion_masks = [binarize_label_gm(load_nii(name).get_data()) for name in tqdm(y_data, desc="loading lesion labels")] # preserve only the GM component; ignore WM and the transmantle sign
lesion_masks = [
binarize_label_gm(load_nii(name).get_fdata())
for name in tqdm(y_data, desc="loading lesion labels")
] # preserve only the GM component; ignore WM and the transmantle sign

# load subcortical masks to exclude these voxels from training
if subcort_masks is not None:
submasks = [load_nii(name).get_data() for name in tqdm(subcort_masks, desc="load subcortical masks")]
nolesion_masks = [np.logical_and(np.logical_not(lesion), submask, brain) for lesion, submask, brain in tzip(lesion_masks, submasks, selected_voxels, desc="extract nonlesional masks")]
submasks = [
load_nii(name).get_fdata()
for name in tqdm(subcort_masks, desc="load subcortical masks")
]
nolesion_masks = [
np.logical_and(np.logical_not(lesion), submask, brain)
for lesion, submask, brain in tzip(
lesion_masks,
submasks,
selected_voxels,
desc="extract nonlesional masks",
)
]
del submasks
else:
nolesion_masks = [np.logical_and(np.logical_not(binary_dilation(lesion, iterations=5)), brain) for lesion, brain in tzip(lesion_masks, selected_voxels, desc="extract nonlesional masks")]

nolesion_masks = [
np.logical_and(np.logical_not(binary_dilation(lesion, iterations=5)), brain)
for lesion, brain in tzip(
lesion_masks, selected_voxels, desc="extract nonlesional masks"
)
]

# lesional_vox = 0
# for lesion in lesion_masks:
# lesion_size = np.sum(lesion)
# lesional_vox += lesion_size
# if lesion_size < 1000:
# print("\nlesion_size: {}".format(lesion_size))
# print("\ntotal lesional voxels: {}".format(lesional_vox))

# Get all the x,y,z coordinates for each image
lesion_centers = [get_mask_voxels(mask) for mask in tqdm(lesion_masks, desc="extract lesional coords")]
nolesion_centers = [get_mask_voxels(mask) for mask in tqdm(nolesion_masks, desc="extract nonlesional coords")]
lesion_centers = [
get_mask_voxels(mask)
for mask in tqdm(lesion_masks, desc="extract lesional coords")
]
nolesion_centers = [
get_mask_voxels(mask)
for mask in tqdm(nolesion_masks, desc="extract nonlesional coords")
]
del nolesion_masks

# load all positive samples (lesional voxels) up to a maximum of n_patches
np.random.seed(seed)
indices = [np.random.permutation(range(0, len(center_les))).tolist()[:min(n_patches, len(center_les))] for center_les in lesion_centers]

lesion_small = [itemgetter(*idx)(centers) for centers, idx in zip(nolesion_centers, indices)]
x_pos_patches = [np.array(get_patches(image, centers, patch_size)) for image, centers in zip(images_norm, lesion_small)]
y_pos_patches = [np.array(get_patches(image, centers, patch_size)) for image, centers in zip(lesion_masks, lesion_small)]
indices = [
np.random.permutation(range(0, len(center_les))).tolist()[
: min(n_patches, len(center_les))
]
for center_les in lesion_centers
]

lesion_small = [
itemgetter(*idx)(centers) for centers, idx in zip(nolesion_centers, indices)
]
x_pos_patches = [
np.array(get_patches(image, centers, patch_size))
for image, centers in tzip(images_norm, lesion_small, desc="extract positive patches")
]
# y_pos_patches = [
# np.array(get_patches(image, centers, patch_size))
# for image, centers in tzip(lesion_masks, lesion_small, desc="extract positive patch labels")
# ]

# load as many random negatives (non-lesions) samples as positive (lesions) samples
indices = [np.random.permutation(range(0, len(center_no_les))).tolist()[:min(n_patches, len(center_les))] for center_no_les, center_les in zip(nolesion_centers, lesion_centers)]

nolesion_small = [itemgetter(*idx)(centers) for centers, idx in zip(nolesion_centers, indices)]
x_neg_patches = [np.array(get_patches(image, centers, patch_size)) for image, centers in zip(images_norm, nolesion_small)]
y_neg_patches = [np.array(get_patches(image, centers, patch_size)) for image, centers in zip(lesion_masks, nolesion_small)]
indices = [
np.random.permutation(range(0, len(center_no_les))).tolist()[
: min(n_patches, len(center_les))
]
for center_no_les, center_les in zip(nolesion_centers, lesion_centers)
]

nolesion_small = [
itemgetter(*idx)(centers) for centers, idx in zip(nolesion_centers, indices)
]
x_neg_patches = [
np.array(get_patches(image, centers, patch_size))
for image, centers in tzip(images_norm, nolesion_small, desc="extract negative patches")
]
# y_neg_patches = [
# np.array(get_patches(image, centers, patch_size))
# for image, centers in tzip(lesion_masks, nolesion_small, desc="extract negative patch labels")
# ]

# concatenate positive and negative patches for each subject
X = np.concatenate([np.concatenate([x1, x2]) for x1, x2 in zip(x_pos_patches, x_neg_patches)], axis=0)
Y = np.concatenate([np.concatenate([y1, y2]) for y1, y2 in zip(y_pos_patches, y_neg_patches)], axis=0)
X = np.concatenate(
[np.concatenate([x1, x2]) for x1, x2 in zip(x_pos_patches, x_neg_patches)], axis=0
)
# Y = np.concatenate(
# [np.concatenate([y1, y2]) for y1, y2 in zip(y_pos_patches, y_neg_patches)], axis=0
# )
Y = np.concatenate(
[np.concatenate([np.ones(y1.shape[0]), np.zeros(y2.shape[0])]) for y1, y2 in zip(x_pos_patches, x_neg_patches)], axis=0
)

return X, Y


def load_training_data(train_x_data, train_y_data, options, subcort_masks, model=None):
'''
"""
Load training and label samples for all given scans and modalities.
Inputs:
@@ -111,7 +209,7 @@ def load_training_data(train_x_data, train_y_data, options, subcort_masks, model
- X: np.array [num_samples, num_channels, p1, p2, p2]
- Y: np.array [num_samples, 1]
'''
"""

# get_scan names and number of modalities used
scans = list(train_x_data.keys())
@@ -121,10 +219,12 @@ def load_training_data(train_x_data, train_y_data, options, subcort_masks, model
# if no model is passed, training samples are extracted by discarding the CSF and darker WM in FLAIR and using all remaining voxels.
# if model is passed, use the trained model to extract all voxels with probability > 0.1
if model is None:
flair_scans = [train_x_data[s]['FLAIR'] for s in scans]
selected_voxels = select_training_voxels(flair_scans, options['min_th'])
flair_scans = [train_x_data[s]["FLAIR"] for s in scans]
selected_voxels = select_training_voxels(flair_scans, options["min_th"])
else:
selected_voxels = select_voxels_from_previous_model(model, train_x_data, options)
selected_voxels = select_voxels_from_previous_model(
model, train_x_data, options
)

# extract patches and labels for each of the modalities
data = []
@@ -134,33 +234,43 @@ def load_training_data(train_x_data, train_y_data, options, subcort_masks, model
y_data = [train_y_data[s] for s in scans]
if subcort_masks is not None:
submasks = [subcort_masks[s] for s in scans]
x_patches, y_patches = load_train_patches(x_data, y_data, selected_voxels, options['patch_size'], submasks, n_patches=options['n_patches'])
x_patches, y_patches = load_train_patches(
x_data,
y_data,
selected_voxels,
options["patch_size"],
submasks,
n_patches=options["n_patches"],
)
else:
x_patches, y_patches = load_train_patches(x_data, y_data, selected_voxels, options['patch_size'], subcort_masks=None, n_patches=options['n_patches'])
x_patches, y_patches = load_train_patches(
x_data,
y_data,
selected_voxels,
options["patch_size"],
subcort_masks=None,
n_patches=options["n_patches"],
)
print("{} shape: {}".format(m, x_patches.shape))
data.append(x_patches)
# stack patches along the channels' dimension [samples, channels, p1, p2, p3]
X = np.stack(data, axis = 1)
Y = y_patches
print(X.shape, Y.shape)
X = np.stack(data, axis=1)
y = y_patches

# apply randomization if selected
if options['randomize_train']:
if options["randomize_train"]:
seed = np.random.randint(np.iinfo(np.int32).max)
np.random.seed(seed)
X = np.random.permutation(X.astype(dtype=np.float32))
np.random.seed(seed)
Y = np.random.permutation(Y.astype(dtype=np.int32))

# Y = [num_samples, p1, p2, p3]
Y = Y[:, Y.shape[1] // 2, Y.shape[2] // 2, Y.shape[3] // 2]

Y = np.squeeze(Y)
Y = np.random.permutation(y.astype(dtype=np.int8))
else:
X = X.astype(dtype=np.float32)
Y = y.astype(dtype=np.int8)

return X, Y
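# note on the randomize_train branch above: permuting X and y after reseeding with the
# same seed yields the same permutation order, so sample i stays aligned with label i.
# a tiny standalone check (synthetic data):
# >>> seed = 42
# >>> np.random.seed(seed); Xp = np.random.permutation(np.arange(5))
# >>> np.random.seed(seed); yp = np.random.permutation(np.arange(5) * 10)
# >>> np.array_equal(yp, Xp * 10)
# True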



def get_mask_voxels(mask):
"""
Compute x,y,z coordinates of a binary mask
@@ -188,12 +298,18 @@ def get_patches(image, centers, patch_size=(16, 16, 16)):
sizes_match = [len(center) == len(patch_size) for center in centers]

if list_of_tuples and sizes_match:
patch_half = tuple([idx//2 for idx in patch_size])
patch_half = tuple([idx // 2 for idx in patch_size])
new_centers = [map(add, center, patch_half) for center in centers]
# padding = tuple((np.int(idx), np.int(size)-np.int(idx)) for idx, size in zip(patch_half, patch_size))
padding = tuple((idx, size-idx) for idx, size in zip(patch_half, patch_size))
new_image = np.pad(image, padding, mode='constant', constant_values=0)
slices = [[slice(c_idx-p_idx, c_idx+(s_idx-p_idx)) for (c_idx, p_idx, s_idx) in zip(center, patch_half, patch_size)] for center in new_centers]
padding = tuple((idx, size - idx) for idx, size in zip(patch_half, patch_size))
new_image = np.pad(image, padding, mode="constant", constant_values=0)
slices = [
[
slice(c_idx - p_idx, c_idx + (s_idx - p_idx))
for (c_idx, p_idx, s_idx) in zip(center, patch_half, patch_size)
]
for center in new_centers
]
# patches = [new_image[idx] for idx in slices]
patches = [new_image[tuple(idx)] for idx in slices]

@@ -213,17 +329,23 @@ def select_training_voxels(input_masks, threshold=0.1, datatype=np.float32, t1=0
"""

# load images and normalize their intensities
images = [load_nii(image_name).get_data() for image_name in input_masks]
images_norm = [(im.astype(dtype=datatype) - im[np.nonzero(im)].mean()) / im[np.nonzero(im)].std() for im in images]
images = [load_nii(image_name).get_fdata() for image_name in input_masks]
images_norm = [
(im.astype(dtype=datatype) - im[np.nonzero(im)].mean())
/ im[np.nonzero(im)].std()
for im in images
]

# select voxels with intensity higher than threshold
rois = [image > threshold for image in tqdm(images_norm, desc="extract sampling masks")]
rois = [
image > threshold for image in tqdm(images_norm, desc="extract sampling masks from FLAIR thresholding")
]
return rois


def binarize_label_gm(mask):
# discard labels wm (2) and transmantle sign (6)
mask_ = np.zeros_like(mask)
tmp = np.stack(np.where(mask == 1), axis=1)
mask_[tmp[:,0], tmp[:,1], tmp[:,2]] = 1
mask_[tmp[:, 0], tmp[:, 1], tmp[:, 2]] = 1
return mask_.astype(np.bool)
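To make the padding-and-centred-slice logic of get_patches easier to follow, here is a self-contained sketch of the same idea on a synthetic volume; extract_patch is a hypothetical helper written only for this illustration and is not part of h5data.py.

import numpy as np

def extract_patch(image, center, patch_size=(16, 16, 16)):
    # pad by (half, size - half) per axis so border-adjacent centers stay in bounds
    half = tuple(s // 2 for s in patch_size)
    padding = tuple((h, s - h) for h, s in zip(half, patch_size))
    padded = np.pad(image, padding, mode="constant", constant_values=0)
    # in the padded array the original voxel `center` sits at center + half,
    # so the patch spanning [center, center + size) is centred on it
    slices = tuple(slice(c, c + s) for c, s in zip(center, patch_size))
    return padded[slices]

img = np.random.rand(64, 64, 64).astype(np.float32)
print(extract_patch(img, center=(5, 30, 60)).shape)  # (16, 16, 16) even near two borders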
2 changes: 1 addition & 1 deletion app/utils/helpers.py
@@ -1,3 +1,3 @@
def bool2str(v):
bn = 1 if str(v).lower() in ["yes", "true", "t", "1"] else 0
return str(bn)
return str(bn)
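# quick usage check (illustrative): bool2str("Yes") -> "1", bool2str("t") -> "1", bool2str(0) -> "0"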
156 changes: 85 additions & 71 deletions app/utils/keras_bayes_utils.py
@@ -1,5 +1,10 @@
import collections
import sys
import time

import numpy as np
import sys, time, collections
from keras import backend as K


def batch_shuffle(index_array, batch_size):
"""Shuffles an array in a batch-wise fashion.
@@ -14,8 +19,8 @@ def batch_shuffle(index_array, batch_size):
batch_count = int(len(index_array) / batch_size)
# to reshape we need to be cleanly divisible by batch size
# we stash extra items and reappend them after shuffling
last_batch = index_array[batch_count * batch_size:]
index_array = index_array[:batch_count * batch_size]
last_batch = index_array[batch_count * batch_size :]
index_array = index_array[: batch_count * batch_size]
index_array = index_array.reshape((batch_count, batch_size))
np.random.shuffle(index_array)
index_array = index_array.flatten()
@@ -31,14 +36,12 @@ def make_batches(size, batch_size):
A list of tuples of array indices.
"""
num_batches = (size + batch_size - 1) // batch_size # round up
return [(i * batch_size, min(size, (i + 1) * batch_size))
for i in range(num_batches)]
return [
(i * batch_size, min(size, (i + 1) * batch_size)) for i in range(num_batches)
]
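# worked example: make_batches(10, 4) -> [(0, 4), (4, 8), (8, 10)]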


def check_num_samples(ins,
batch_size=None,
steps=None,
steps_name='steps'):
def check_num_samples(ins, batch_size=None, steps=None, steps_name="steps"):
"""Checks the number of samples provided for training and evaluation.
The number of samples is not defined when running with `steps`,
in which case the number of samples is set to `None`.
@@ -63,20 +66,20 @@ def check_num_samples(ins,
ValueError: In case of invalid arguments.
"""
if steps is not None and batch_size is not None:
raise ValueError(
'If ' + steps_name + ' is set, the `batch_size` must be None.')
raise ValueError("If " + steps_name + " is set, the `batch_size` must be None.")

if not ins or any(K.is_tensor(x) for x in ins):
if steps is None:
raise ValueError(
'If your data is in the form of symbolic tensors, '
'you should specify the `' + steps_name + '` argument '
'(instead of the `batch_size` argument, '
'because symbolic tensors are expected to produce '
'batches of input data).')
"If your data is in the form of symbolic tensors, "
"you should specify the `" + steps_name + "` argument "
"(instead of the `batch_size` argument, "
"because symbolic tensors are expected to produce "
"batches of input data)."
)
return None

if hasattr(ins[0], 'shape'):
if hasattr(ins[0], "shape"):
return int(ins[0].shape[0])
return None # Edge case where ins == [static_learning_phase]

@@ -94,8 +97,9 @@ class Progbar(object):
interval: Minimum visual progress update interval (in seconds).
"""

def __init__(self, target, width=30, verbose=1, interval=0.05,
stateful_metrics=None):
def __init__(
self, target, width=30, verbose=1, interval=0.05, stateful_metrics=None
):
self.target = target
self.width = width
self.verbose = verbose
@@ -105,9 +109,9 @@ def __init__(self, target, width=30, verbose=1, interval=0.05,
else:
self.stateful_metrics = set()

self._dynamic_display = ((hasattr(sys.stdout, 'isatty') and
sys.stdout.isatty()) or
'ipykernel' in sys.modules)
self._dynamic_display = (
hasattr(sys.stdout, "isatty") and sys.stdout.isatty()
) or "ipykernel" in sys.modules
self._total_width = 0
self._seen_so_far = 0
self._values = collections.OrderedDict()
@@ -128,11 +132,13 @@ def update(self, current, values=None):
for k, v in values:
if k not in self.stateful_metrics:
if k not in self._values:
self._values[k] = [v * (current - self._seen_so_far),
current - self._seen_so_far]
self._values[k] = [
v * (current - self._seen_so_far),
current - self._seen_so_far,
]
else:
self._values[k][0] += v * (current - self._seen_so_far)
self._values[k][1] += (current - self._seen_so_far)
self._values[k][1] += current - self._seen_so_far
else:
# Stateful metrics output a numeric value. This representation
# means "take an average from a single value" but keeps the
@@ -141,35 +147,38 @@ def update(self, current, values=None):
self._seen_so_far = current

now = time.time()
info = ' - %.0fs' % (now - self._start)
info = " - %.0fs" % (now - self._start)
if self.verbose == 1:
if (now - self._last_update < self.interval and
self.target is not None and current < self.target):
if (
now - self._last_update < self.interval
and self.target is not None
and current < self.target
):
return

prev_total_width = self._total_width
if self._dynamic_display:
sys.stdout.write('\b' * prev_total_width)
sys.stdout.write('\r')
sys.stdout.write("\b" * prev_total_width)
sys.stdout.write("\r")
else:
sys.stdout.write('\n')
sys.stdout.write("\n")

if self.target is not None:
numdigits = int(np.floor(np.log10(self.target))) + 1
barstr = '%%%dd/%d [' % (numdigits, self.target)
barstr = "%%%dd/%d [" % (numdigits, self.target)
bar = barstr % current
prog = float(current) / self.target
prog_width = int(self.width * prog)
if prog_width > 0:
bar += ('=' * (prog_width - 1))
bar += "=" * (prog_width - 1)
if current < self.target:
bar += '>'
bar += ">"
else:
bar += '='
bar += ('.' * (self.width - prog_width))
bar += ']'
bar += "="
bar += "." * (self.width - prog_width)
bar += "]"
else:
bar = '%7d/Unknown' % current
bar = "%7d/Unknown" % current

self._total_width = len(bar)
sys.stdout.write(bar)
@@ -181,55 +190,56 @@ def update(self, current, values=None):
if self.target is not None and current < self.target:
eta = time_per_unit * (self.target - current)
if eta > 3600:
eta_format = ('%d:%02d:%02d' %
(eta // 3600, (eta % 3600) // 60, eta % 60))
eta_format = "%d:%02d:%02d" % (
eta // 3600,
(eta % 3600) // 60,
eta % 60,
)
elif eta > 60:
eta_format = '%d:%02d' % (eta // 60, eta % 60)
eta_format = "%d:%02d" % (eta // 60, eta % 60)
else:
eta_format = '%ds' % eta
eta_format = "%ds" % eta

info = ' - ETA: %s' % eta_format
info = " - ETA: %s" % eta_format
else:
if time_per_unit >= 1:
info += ' %.0fs/step' % time_per_unit
info += " %.0fs/step" % time_per_unit
elif time_per_unit >= 1e-3:
info += ' %.0fms/step' % (time_per_unit * 1e3)
info += " %.0fms/step" % (time_per_unit * 1e3)
else:
info += ' %.0fus/step' % (time_per_unit * 1e6)
info += " %.0fus/step" % (time_per_unit * 1e6)

for k in self._values:
info += ' - %s:' % k
info += " - %s:" % k
if isinstance(self._values[k], list):
avg = np.mean(
self._values[k][0] / max(1, self._values[k][1]))
avg = np.mean(self._values[k][0] / max(1, self._values[k][1]))
if abs(avg) > 1e-3:
info += ' %.4f' % avg
info += " %.4f" % avg
else:
info += ' %.4e' % avg
info += " %.4e" % avg
else:
info += ' %s' % self._values[k]
info += " %s" % self._values[k]

self._total_width += len(info)
if prev_total_width > self._total_width:
info += (' ' * (prev_total_width - self._total_width))
info += " " * (prev_total_width - self._total_width)

if self.target is not None and current >= self.target:
info += '\n'
info += "\n"

sys.stdout.write(info)
sys.stdout.flush()

elif self.verbose == 2:
if self.target is None or current >= self.target:
for k in self._values:
info += ' - %s:' % k
avg = np.mean(
self._values[k][0] / max(1, self._values[k][1]))
info += " - %s:" % k
avg = np.mean(self._values[k][0] / max(1, self._values[k][1]))
if avg > 1e-3:
info += ' %.4f' % avg
info += " %.4f" % avg
else:
info += ' %.4e' % avg
info += '\n'
info += " %.4e" % avg
info += "\n"

sys.stdout.write(info)
sys.stdout.flush()
@@ -286,39 +296,42 @@ def slice_arrays(arrays, start=None, stop=None):
if arrays is None:
return [None]
elif isinstance(arrays, list):
if hasattr(start, '__len__'):
if hasattr(start, "__len__"):
# hdf5 datasets only support list objects as indices
if hasattr(start, 'shape'):
if hasattr(start, "shape"):
start = start.tolist()
return [None if x is None else x[start] for x in arrays]
else:
return [None if x is None else x[start:stop] for x in arrays]
else:
if hasattr(start, '__len__'):
if hasattr(start, 'shape'):
if hasattr(start, "__len__"):
if hasattr(start, "shape"):
start = start.tolist()
return arrays[start]
elif hasattr(start, '__getitem__'):
elif hasattr(start, "__getitem__"):
return arrays[start:stop]
else:
return [None]


def slice_X(X, start=None, stop=None):
if type(X) == list:
if hasattr(start, '__len__'):
if hasattr(start, "__len__"):
return [x[start] for x in X]
else:
return [x[start:stop] for x in X]
else:
if hasattr(start, '__len__'):
if hasattr(start, "__len__"):
return X[start]
else:
return X[start:stop]


# def make_batches(size, batch_size):
# nb_batch = int(np.ceil(size/float(batch_size)))
# return [(i*batch_size, min(size, (i+1)*batch_size)) for i in range(0, nb_batch)]


def numpy_minibatch(numpy_array, batch_size=1, min_batch_size=1):
"""
Creates a minibatch generator over a numpy array. :func:`minibatch` delegates to this generator
@@ -339,12 +352,13 @@ def numpy_minibatch(numpy_array, batch_size=1, min_batch_size=1):
A numpy array of the minibatch. It will yield over the first dimension of the input.
"""
numpy_array = np.asarray(numpy_array)
assert 0 < min_batch_size <= batch_size, \
"batch_size (%d) has to be larger than min_batch_size (%d) and they both have to be greater than zero!" % \
(batch_size, min_batch_size)
assert 0 < min_batch_size <= batch_size, (
"batch_size (%d) has to be larger than min_batch_size (%d) and they both have to be greater than zero!"
% (batch_size, min_batch_size)
)
# go through the first dimension of the input array.
for i in iter(range((numpy_array.shape[0] // batch_size) + 1)):
idx = i * batch_size
data = numpy_array[idx:(idx + batch_size)]
data = numpy_array[idx : (idx + batch_size)]
if data.shape[0] >= min_batch_size:
yield data
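A quick illustration of how numpy_minibatch behaves; the import path is an assumption for this sketch. The trailing short batch is still yielded as long as it meets min_batch_size.

import numpy as np
from utils.keras_bayes_utils import numpy_minibatch  # assumed import path

for batch in numpy_minibatch(np.arange(10), batch_size=4, min_batch_size=1):
    print(batch)
# [0 1 2 3]
# [4 5 6 7]
# [8 9]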
31 changes: 16 additions & 15 deletions app/utils/metrics.py
@@ -1,5 +1,6 @@
import numpy as np


def dc(im1, im2):
"""
dice coefficient: 2 * n_t / (n_a + n_b), where n_t = |A ∩ B|, n_a = |A|, n_b = |B|.
@@ -12,12 +13,12 @@ def dc(im1, im2):

im_sum = im1.sum() + im2.sum()
if im_sum == 0:
return empty_score
return 1

# Compute Dice coefficient
intersection = np.logical_and(im1, im2)

dc = 2. * intersection.sum() / im_sum
dc = 2.0 * intersection.sum() / im_sum

return dc

@@ -57,21 +58,21 @@ def perf_measure_vox(y_pred, y_true):
FP = np.sum(np.logical_and(y_pred == 1, y_true == 0))
FN = np.sum(np.logical_and(y_pred == 0, y_true == 1))

sensitivity = 100*TP/(TP+FN)
specificity = 100*TN/(TN+FP)
sensitivity = 100 * TP / (TP + FN)
specificity = 100 * TN / (TN + FP)

print('-'*60)
print("sensitivity: %.2f" %(sensitivity))
print("specificity: %.2f" %(specificity))
print('-'*60)
print("-" * 60)
print("sensitivity: %.2f" % (sensitivity))
print("specificity: %.2f" % (specificity))
print("-" * 60)

perf = {
'sensitivity': sensitivity,
'specificity': specificity,
'TP': TP,
'FP': FP,
'TN': TN,
'FN': FN,
}
"sensitivity": sensitivity,
"specificity": specificity,
"TP": TP,
"FP": FP,
"TN": TN,
"FN": FN,
}

return perf
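As a quick arithmetic check of the Dice formula implemented by dc() above, consider two small synthetic masks with four voxels each and an overlap of two; the manual computation below matches what dc(a, b) returns.

import numpy as np

a = np.zeros((4, 4), dtype=bool); a[0, :] = True                      # |A| = 4
b = np.zeros((4, 4), dtype=bool); b[0, 2:] = True; b[1, :2] = True    # |B| = 4
intersection = np.logical_and(a, b).sum()                             # |A ∩ B| = 2
print(2.0 * intersection / (a.sum() + b.sum()))                       # 2*2 / (4+4) = 0.5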
257 changes: 194 additions & 63 deletions app/utils/patch_dataloader.py

Large diffs are not rendered by default.

98 changes: 60 additions & 38 deletions app/utils/post_processor.py
@@ -1,10 +1,12 @@
import os, subprocess
import os

import nibabel as nib
import numpy as np
from utils.metrics import *
from sklearn.metrics import cohen_kappa_score
from scipy import ndimage as nd
from scipy.ndimage import binary_opening
import nibabel as nib
from sklearn.metrics import cohen_kappa_score

from utils.metrics import *


def post_processing(input_scan, options, header, save_nifti=True):
@@ -21,9 +23,9 @@ def post_processing(input_scan, options, header, save_nifti=True):
output:
- output_scan: final binarized segmentation
"""
t_bin = options['t_bin']
t_bin = options["t_bin"]
# t_bin = 0
l_min = options['l_min']
l_min = options["l_min"]
output_scan = np.zeros_like(input_scan)
labels_scan = np.zeros_like(input_scan)

@@ -32,35 +34,46 @@ def post_processing(input_scan, options, header, save_nifti=True):
# perform morphological operations (dilation of the erosion of the input)
morphed = binary_opening(t_segmentation, iterations=1)
# label connected components
morphed = nd.binary_fill_holes(morphed, structure=np.ones((5,5,5))).astype(int)
pred_labels, _ = nd.label(morphed, structure=np.ones((3,3,3)))
morphed = nd.binary_fill_holes(morphed, structure=np.ones((5, 5, 5))).astype(int)
pred_labels, _ = nd.label(morphed, structure=np.ones((3, 3, 3)))
label_list = np.unique(pred_labels)
num_elements_by_lesion = nd.labeled_comprehension(morphed, pred_labels, label_list, np.sum, float, 0)
num_elements_by_lesion = nd.labeled_comprehension(
morphed, pred_labels, label_list, np.sum, float, 0
)

# filter candidates by size and store those > l_min
for l in range(len(num_elements_by_lesion)):
if num_elements_by_lesion[l]>l_min:
if num_elements_by_lesion[l] > l_min:
# assign voxels to output
current_voxels = np.stack(np.where(pred_labels == l), axis=1)
output_scan[current_voxels[:,0], current_voxels[:,1], current_voxels[:,2]] = 1
output_scan[
current_voxels[:, 0], current_voxels[:, 1], current_voxels[:, 2]
] = 1

for l in range(len(num_elements_by_lesion)):
if num_elements_by_lesion[l]>l_min:
if num_elements_by_lesion[l] > l_min:
# assign voxels to output
current_voxels = np.stack(np.where(pred_labels == l), axis=1)
labels_scan[current_voxels[:,0], current_voxels[:,1], current_voxels[:,2]] = num_elements_by_lesion[l].astype(np.int)

labels_scan[
current_voxels[:, 0], current_voxels[:, 1], current_voxels[:, 2]
] = label_list[l]

count = np.count_nonzero(num_elements_by_lesion.astype(dtype=np.int) > l_min)

options['test_morph_name'] = options['experiment'] + '_' + options['test_scan'] + '_out_morph_labels.nii.gz'
options["test_morph_name"] = (
options["experiment"] + "_" + options["test_scan"] + "_out_morph_labels.nii.gz"
)

#save the output segmentation as nifti
# save the output segmentation as nifti
if save_nifti:
nii_out = nib.Nifti1Image(output_scan, affine=header.get_qform(), header=header)
nii_out.to_filename(os.path.join(options['pred_folder'], options['test_name']))
labels_out = nib.Nifti1Image(labels_scan, affine=header.get_qform(), header=header)
labels_out.to_filename(os.path.join(options['pred_folder'], options['test_morph_name']))
nii_out.to_filename(os.path.join(options["pred_folder"], options["test_name"]))
labels_out = nib.Nifti1Image(
labels_scan, affine=header.get_qform(), header=header
)
labels_out.to_filename(
os.path.join(options["pred_folder"], options["test_morph_name"])
)
return output_scan, pred_labels, count


@@ -69,9 +82,9 @@ def extract_lesional_clus(label, input_scan, scan, options):
find cluster components in the prediction
corresponding to the true label cluster
"""
t_bin = options['t_bin']
t_bin = options["t_bin"]
# t_bin = 0
l_min = options['l_min']
l_min = options["l_min"]
output_scan = np.zeros_like(input_scan)

# threshold input segmentation
@@ -82,10 +95,12 @@ def extract_lesional_clus(label, input_scan, scan, options):
morphed = binary_opening(t_segmentation, iterations=1)
# morphed = t_segmentation
# label connected components
morphed = nd.binary_fill_holes(morphed, structure=np.ones((5,5,5))).astype(int)
pred_labels, _ = nd.label(morphed, structure=np.ones((3,3,3)))
morphed = nd.binary_fill_holes(morphed, structure=np.ones((5, 5, 5))).astype(int)
pred_labels, _ = nd.label(morphed, structure=np.ones((3, 3, 3)))
label_list = np.unique(pred_labels)
num_elements_by_lesion = nd.labeled_comprehension(morphed, pred_labels, label_list, np.sum, float, 0)
num_elements_by_lesion = nd.labeled_comprehension(
morphed, pred_labels, label_list, np.sum, float, 0
)

Y = np.zeros((len(num_elements_by_lesion > l_min)))
for l in range(len(num_elements_by_lesion > l_min)):
@@ -97,28 +112,35 @@ def extract_lesional_clus(label, input_scan, scan, options):
lesion_pred[lesion_pred == clus_ind] = 1

lesion_pred_out = nib.Nifti1Image(lesion_pred, np.eye(4))
options['test_lesion_pred'] = options['experiment'] + '_' + options['test_scan'] + '_out_lesion_pred_only.nii.gz'
lesion_pred_out.to_filename(os.path.join(options['pred_folder'], options['test_lesion_pred']))
options["test_lesion_pred"] = (
options["experiment"]
+ "_"
+ options["test_scan"]
+ "_out_lesion_pred_only.nii.gz"
)
lesion_pred_out.to_filename(
os.path.join(options["pred_folder"], options["test_lesion_pred"])
)
return lesion_pred


def performancer(perf, scan, test, label, lesion_pred, count):
perf[scan] = perf_measure_vox(test.flatten(), label.flatten())
perf[scan]['accuracy'] = accuracy_score(label.flatten(), test.flatten())
perf[scan]['kappa'] = cohen_kappa_score(label.flatten(), lesion_pred.flatten())
perf[scan]["accuracy"] = accuracy_score(label.flatten(), test.flatten())
perf[scan]["kappa"] = cohen_kappa_score(label.flatten(), lesion_pred.flatten())
# perf[scan]['jaccard'] = jc(label.flatten(), lesion_pred.flatten())
perf[scan]['dice_coef'] = dc(lesion_pred, label)
perf[scan]["dice_coef"] = dc(lesion_pred, label)

cm = confusion_matrix(label.flatten(), test.flatten(), (1,0))
cm_norm = 100*cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm = confusion_matrix(label.flatten(), test.flatten(), (1, 0))
cm_norm = 100 * cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
print(cm_norm.astype(int))

print('-'*70)
print("dice coefficient: %.4f " %(perf[scan]['dice_coef']))
print('-'*70)
print("-" * 70)
print("dice coefficient: %.4f " % (perf[scan]["dice_coef"]))
print("-" * 70)

print('-'*70)
perf[scan]['clusters'] = count
print("no. of clusters (lesional + extra-lesional): %i " %(count))
print('-'*70)
print("-" * 70)
perf[scan]["clusters"] = count
print("no. of clusters (lesional + extra-lesional): %i " % (count))
print("-" * 70)
return perf
20 changes: 20 additions & 0 deletions app/utils/read_h5data.py
@@ -0,0 +1,20 @@
#!/usr/bin/env python3

try:
import h5py
except ImportError:
raise ImportError('install h5py first: `pip install h5py --upgrade`')

import numpy as np

h5file = 'noel_FCDdata_N_patches_1000_patchsize_16_iso_fix.h5'
# h5file available from https://doi.org/10.5281/zenodo.3239446

with h5py.File(h5file, "r") as f:
X = f['data'][:].astype('f')
y = f['labels'][:].astype('i8')

print(X.shape, y.shape)

print(np.histogram(y, bins=2))
78 changes: 78 additions & 0 deletions app/utils/reporting.py
@@ -0,0 +1,78 @@
#!/usr/bin/env python
# coding: utf-8

'''Rank clusters based on probability/size thresholding and uncertainty,
and print the output.
Usage:
conda activate deepFCD
python3 reporting.py ${PATIENT_ID} ${IO_DIRECTORY}
'''

import os
import sys

import nibabel as nib
import numpy as np
from nibabel import load as load_nii
from sklearn import preprocessing
from tabulate import tabulate

from atlasreader.atlasreader import read_atlas_peak
from confidence import extractLesionCluster

scan = sys.argv[1]
options = {}
options["data_folder"] = os.path.join(sys.argv[2], scan, "noel_deepFCD_dropoutMC")

modality = [
"_noel_deepFCD_dropoutMC_prob_mean_1.nii.gz",
"_noel_deepFCD_dropoutMC_prob_var_1.nii.gz",
]
data_bayes, data_bayes_var = {}, {}

cwd = os.path.realpath(os.path.dirname(__file__))
# mask to exclude all subcortical findings
options['submask'] = os.path.join(cwd, '../templates', 'subcortical_mask_v3.nii.gz')

# load paths to all the data
data_bayes[scan] = os.path.join(options["data_folder"], scan + str(modality[0]))
data_bayes_var[scan] = os.path.join(options["data_folder"], scan + str(modality[1]))

ea = load_nii(data_bayes[scan]).get_fdata()
ea_var = load_nii(data_bayes_var[scan]).get_fdata()

options["header"] = load_nii(data_bayes[scan]).header

options["t_bin"] = 0.6 # probability threshold
options["l_min"] = 150 # cluster size threshold

scan_keys = []

for k in data_bayes.keys():
scan_keys.append(k)

results = {}
output_scan, results = extractLesionCluster(scan, ea, ea_var, options)

header = load_nii(data_bayes[scan]).header
affine = header.get_qform()
out_scan = nib.Nifti1Image(output_scan, affine=affine, header=header)

results.sort_values("rank")
min_max_scaler = preprocessing.MinMaxScaler()
invert_var = 1 / results["var"]
results["confidence"] = np.round(100.0 * min_max_scaler.fit_transform(invert_var.values.reshape(-1, 1)), 1)
ranked_results = results.sort_values("rank")
ranked_results.reset_index(inplace=True)

labels = []
for N in np.arange(0,len(ranked_results.coords)):
label = read_atlas_peak(atlastype='harvard_oxford', coordinate=ranked_results.coords[N], prob_thresh=5)
# for (perc, l) in label[0]:
# print(perc, l)
labels.append(label[0])
# print(label)

ranked_results['label'] = labels
print(tabulate(ranked_results, headers = 'keys', tablefmt = 'simple'))
68 changes: 68 additions & 0 deletions ci/runner.Dockerfile
@@ -0,0 +1,68 @@
FROM noelmni/cuda:10.0-cudnn7-devel-ubuntu18.04
LABEL maintainer="Ravnoor Singh Gill <ravnoor@gmail.com>" \
org.opencontainers.image.title="Self-hosted Github Actions runner for deepFCD" \
org.opencontainers.image.description="Automated Detection of Focal Cortical Dysplasia using Deep Learning" \
org.opencontainers.image.licenses="BSD-3-Clause" \
org.opencontainers.image.source="https://github.com/NOEL-MNI/deepFCD" \
org.opencontainers.image.url="https://github.com/NOEL-MNI/deepFCD"

# manually update outdated repository key
# fixes invalid GPG error: https://forums.developer.nvidia.com/t/gpg-error-http-developer-download-nvidia-com-compute-cuda-repos-ubuntu1804-x86-64/212904
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub

ARG RUNNER_VERSION=2.309.0
ARG NVM_VERSION=0.39.5

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends \
bash build-essential curl jq libssl-dev libffi-dev \
nano python3-dev software-properties-common unzip wget

# install git 2.17+
RUN add-apt-repository ppa:git-core/candidate -y
RUN apt-get update
RUN apt-get install -y git

RUN apt-get remove nodejs npm

# github actions needs a non-root to run
RUN useradd -m ga
WORKDIR /home/ga/actions-runner
ENV HOME=/home/ga

# https://stackoverflow.com/questions/25899912/how-to-install-nvm-in-docker/60137919#60137919
SHELL ["/bin/bash", "--login", "-i", "-c"]
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v${NVM_VERSION}/install.sh | bash
RUN source /root/.bashrc && nvm install 16
SHELL ["/bin/bash", "--login", "-c"]

# install Github Actions runner
RUN curl -s -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz && \
tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz && \
./bin/installdependencies.sh

# RUN wget -c https://github.com/NixOS/patchelf/releases/download/0.18.0/patchelf-0.18.0-x86_64.tar.gz && \
# ./bin/patchelf --set-interpreter /opt/glibc-2.28/lib/ld-linux-x86-64.so.2 --set-rpath /opt/glibc-2.28/lib/ /home/ga/.nvm/versions/node/v20.6.1/bin/node

# add over the start.sh script
ADD start-runner.sh start.sh

# make the script executable
RUN chmod +x start.sh

# set permission and user to ga
RUN chown -R ga /home/ga
USER ga

# install Conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py38_23.5.2-0-Linux-x86_64.sh && \
/bin/bash Miniconda3-py38_23.5.2-0-Linux-x86_64.sh -b && \
rm Miniconda3-py38_23.5.2-0-Linux-x86_64.sh && \
echo '. ~/miniconda3/etc/profile.d/conda.sh' >> ~/.bashrc

# create a dir to store inputs and outputs
RUN mkdir ~/io

# set the entrypoint to the start.sh script
ENTRYPOINT ["./start.sh"]
29 changes: 29 additions & 0 deletions ci/runner.docker-compose.yml
@@ -0,0 +1,29 @@
version: '3.9'

services:
runner:
image: noelmni/deep-fcd:runner_latest
# command: '/app/inference.py FCD_001 T1.nii.gz FLAIR.nii.gz /io cuda0 1 1'
# command: nvidia-smi
# entrypoint: /bin/bash
build:
context: .
dockerfile: runner.Dockerfile
args:
RUNNER_VERSION: '2.309.0'
NVM_VERSION: '0.39.5'
deploy:
resources:
reservations:
devices:
- driver: nvidia
# count: 1
device_ids: ['1']
capabilities: [gpu]
# volumes:
# - '$PWD/io:/io'
# - /var/run/docker.sock:/var/run/docker.sock
environment:
GH_TOKEN: ${GH_TOKEN}
GH_OWNER: ${GH_OWNER}
GH_REPOSITORY: ${GH_REPOSITORY}
29 changes: 29 additions & 0 deletions ci/start-runner.sh
@@ -0,0 +1,29 @@
#!/usr/bin/env bash

GH_OWNER=${GH_OWNER}
GH_REPOSITORY=${GH_REPOSITORY}
GH_TOKEN=${GH_TOKEN}

HOSTNAME=$(hostname | cut -d "." -f 1)
RUNNER_SUFFIX=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 7 | head -n 1)
RUNNER_NAME="minion-${HOSTNAME}-${RUNNER_SUFFIX}"

echo ${RUNNER_NAME}

REG_TOKEN=$(curl -sX POST -H "Accept: application/vnd.github+json" -H "Authorization: token ${GH_TOKEN}" https://api.github.com/repos/${GH_OWNER}/${GH_REPOSITORY}/actions/runners/registration-token | jq .token --raw-output)

cd /home/ga/actions-runner

./config.sh --unattended --url https://github.com/${GH_OWNER}/${GH_REPOSITORY} --token ${REG_TOKEN} --name ${RUNNER_NAME}

cleanup() {
echo "Removing runner..."
./config.sh remove --unattended --token ${REG_TOKEN}
}

export PATH=/home/ga/miniconda3/condabin:/home/ga/miniconda3/bin:$PATH

trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

./run.sh & wait $!
12 changes: 12 additions & 0 deletions docs/reporting.md
@@ -0,0 +1,12 @@
| | index | confidence | coords | id | probability | rank | var | label (atlas probability, ROI) |
|----|---------|--------------|--------------------------|---------|---------------|--------|-----------|--------------------------------------------------------------|
| 0 | 13 | 68.3651 | [59.25, -31.36, -21.23] | BAR_002 | 0.823546 | 1 | 0.0958885 | [44.0, 'Right_Inferior_Temporal_Gyrus_posterior_division'] |
| 1 | 11 | 67.6104 | [36.06, 3.44, -1.15] | BAR_002 | 0.831096 | 2 | 0.0969588 | [19.0, 'Right_Insular_Cortex'] |
| 2 | 5 | 57.0031 | [-8.26, -87.08, 1.6] | BAR_002 | 0.775114 | 3 | 0.115001 | [54.0, 'Left_Intracalcarine_Cortex'] |
| 3 | 4 | 55.6648 | [-29.31, 0.54, -21.63] | BAR_002 | 0.784827 | 4 | 0.117766 | [58.0, 'Left_Amygdala'] |
| 4 | 10 | 53.5866 | [27.59, -7.48, -36.15] | BAR_002 | 0.73001 | 5 | 0.122333 | [44.0, 'Right_Parahippocampal_Gyrus_anterior_division'] |
| 5 | 7 | 52.9773 | [12.19, -91.22, 6.13] | BAR_002 | 0.740456 | 6 | 0.12374 | [45.0, 'Right_Occipital_Pole'] |
| 6 | 0 | 51.3822 | [-58.15, -55.16, -22.24] | BAR_002 | 0.754496 | 7 | 0.127582 | [63.0, 'Left_Inferior_Temporal_Gyrus_temporooccipital_part'] |
| 7 | 1 | 46.3947 | [-40.54, -17.35, -33.51] | BAR_002 | 0.665466 | 8 | 0.141297 | [34.0, 'Left_Inferior_Temporal_Gyrus_posterior_division'] |
| 8 | 9 | 46.3668 | [17.55, -72.52, 13.29] | BAR_002 | 0.718559 | 9 | 0.141382 | [23.0, 'Right_Intracalcarine_Cortex'] |
| 9 | 2 | 46.3555 | [-37.61, 7.89, -6.89] | BAR_002 | 0.736747 | 10 | 0.141416 | [98.0, 'Left_Insular_Cortex'] |
9 changes: 9 additions & 0 deletions tests/run_tests.sh
@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -e

pushd "$(dirname "$0")"

echo "Running all tests"
python3 test_deepFCD.py $@

popd
150 changes: 150 additions & 0 deletions tests/test_deepFCD.py
@@ -0,0 +1,150 @@
"""
Test deepFCD.py
nptest.assert_allclose
self.assertEqual
self.assertTrue
"""

import os
import unittest
from tempfile import mktemp

import ants
import numpy as np
import numpy.testing as nptest

from utils import compare_images


params = {}
if os.environ.get("CI_TESTING") is not None:
params["CI_TESTING_PRED_DIR"] = os.environ.get("CI_TESTING_PRED_DIR")
params["CI_TESTING_PATIENT_ID"] = os.environ.get("CI_TESTING_PATIENT_ID")
else:
params["CI_TESTING_PRED_DIR"] = "/host/hamlet/local_raid/data/ravnoor/sandbox/pytests"
params["CI_TESTING_PATIENT_ID"] = "sub-00055"


class TestModule_deepFCD(unittest.TestCase):

def setUp(self):
# load predictions from a previously validated run (treated as ground-truth labels in this context)
self.gt_deepMask = ants.image_read('segmentations/sub-00055/sub-00055_brain_mask_final.nii.gz').clone('unsigned int')
self.gt_deepFCD_mean = ants.image_read('segmentations/sub-00055/noel_deepFCD_dropoutMC/sub-00055_noel_deepFCD_dropoutMC_prob_mean_1.nii.gz').clone('float')
self.gt_deepFCD_var = ants.image_read('segmentations/sub-00055/noel_deepFCD_dropoutMC/sub-00055_noel_deepFCD_dropoutMC_prob_var_1.nii.gz').clone('float')

pred_path = os.path.join(params["CI_TESTING_PRED_DIR"], params["CI_TESTING_PATIENT_ID"])
# load predictions from the most recent run
self.pred_deepMask = ants.image_read(pred_path + '/' + params["CI_TESTING_PATIENT_ID"] + '_brain_mask_final.nii.gz').clone('unsigned int')
self.pred_deepFCD_mean = ants.image_read(pred_path + '/noel_deepFCD_dropoutMC/' + params["CI_TESTING_PATIENT_ID"] + '_noel_deepFCD_dropoutMC_prob_mean_1.nii.gz').clone('float')
self.pred_deepFCD_var = ants.image_read(pred_path + '/noel_deepFCD_dropoutMC/' + params["CI_TESTING_PATIENT_ID"] + '_noel_deepFCD_dropoutMC_prob_var_1.nii.gz').clone('float')

self.imgs = [self.pred_deepMask, self.pred_deepFCD_mean, self.pred_deepFCD_var]
self.pixeltypes = ['unsigned char', 'unsigned int', 'float']

def tearDown(self):
pass

def test_image_header_info(self):
# def image_header_info(filename):
for img in self.imgs:
img.set_spacing([6.9]*img.dimension)
img.set_origin([3.6]*img.dimension)
tmpfile = mktemp(suffix='.nii.gz')
ants.image_write(img, tmpfile)

info = ants.image_header_info(tmpfile)
self.assertEqual(info['dimensions'], img.shape)
nptest.assert_allclose(info['direction'], img.direction)
self.assertEqual(info['nComponents'], img.components)
self.assertEqual(info['nDimensions'], img.dimension)
self.assertEqual(info['origin'], img.origin)
self.assertEqual(info['pixeltype'], img.pixeltype)
self.assertEqual(info['pixelclass'], 'vector' if img.has_components else 'scalar')
self.assertEqual(info['spacing'], img.spacing)

try:
os.remove(tmpfile)
except:
pass

# non-existent file
with self.assertRaises(Exception):
tmpfile = mktemp(suffix='.nii.gz')
ants.image_header_info(tmpfile)


def test_image_read_write(self):
# def image_read(filename, dimension=None, pixeltype='float'):
# def image_write(image, filename):

# test scalar images
for img in self.imgs:
img = (img - img.min()) / (img.max() - img.min())
img = img * 255.
img = img.clone('unsigned char')
for ptype in self.pixeltypes:
img = img.clone(ptype)
tmpfile = mktemp(suffix='.nii.gz')
ants.image_write(img, tmpfile)

img2 = ants.image_read(tmpfile)
self.assertTrue(ants.image_physical_space_consistency(img,img2))
self.assertEqual(img2.components, img.components)
nptest.assert_allclose(img.numpy(), img2.numpy())

# unsupported ptype
with self.assertRaises(Exception):
ants.image_read(tmpfile, pixeltype='not-suppoted-ptype')

# test saving/loading as npy
for img in self.imgs:
tmpfile = mktemp(suffix='.npy')
ants.image_write(img, tmpfile)
img2 = ants.image_read(tmpfile)

self.assertTrue(ants.image_physical_space_consistency(img,img2))
self.assertEqual(img2.components, img.components)
nptest.assert_allclose(img.numpy(), img2.numpy())

# with no json header
arr = img.numpy()
tmpfile = mktemp(suffix='.npy')
np.save(tmpfile, arr)
img2 = ants.image_read(tmpfile)
nptest.assert_allclose(img.numpy(), img2.numpy())

# non-existant file
with self.assertRaises(Exception):
tmpfile = mktemp(suffix='.nii.gz')
ants.image_read(tmpfile)


def test_brain_mask_segmentation(self):
metric = compare_images(self.gt_deepMask, self.pred_deepMask)
print("overlap of the brain mask with the label: {}".format(metric))
# set relative tolerance to 0.05
# predicted image is expected to have overlap within 0.05
nptest.assert_allclose(1., metric, rtol=0.05, atol=0)


def test_deepFCD_segmentation_mean(self):
metric = compare_images(self.gt_deepFCD_mean, self.pred_deepFCD_mean, metric_type='correlation')
print("correlation of the mean probability map with the the label: {}".format(metric))
# set relative tolerance to 0.05
# predicted image is expected to have correlation within 0.05
nptest.assert_allclose(1., metric, rtol=0.05, atol=0)


def test_deepFCD_segmentation_var(self):
metric = compare_images(self.gt_deepFCD_var, self.pred_deepFCD_var, metric_type='correlation')
print("correlation of the mean uncertainty map with the the label: {}".format(metric))
# set relative tolerance to 0.05
# predicted image is expected to have correlation within 0.05
nptest.assert_allclose(1., metric, rtol=0.05, atol=0)


if __name__ == '__main__':
unittest.main()
85 changes: 85 additions & 0 deletions tests/utils.py
@@ -0,0 +1,85 @@
import ants
import numpy as np

def dilate_labels(label, dilated_label_fname):
"""
Apply morphological operations to an image
ANTsR function: `morphology`
Arguments
---------
input : ANTsImage
input image
operation : string
operation to apply
"close" Morpholgical closing
"dilate" Morpholgical dilation
"erode" Morpholgical erosion
"open" Morpholgical opening
radius : scalar
radius of structuring element
mtype : string
type of morphology
"binary" Binary operation on a single value
"grayscale" Grayscale operations
value : scalar
value to operation on (type='binary' only)
shape : string
shape of the structuring element ( type='binary' only )
"ball" spherical structuring element
"box" box shaped structuring element
"cross" cross shaped structuring element
"annulus" annulus shaped structuring element
"polygon" polygon structuring element
radius_is_parametric : boolean
used parametric radius boolean (shape='ball' and shape='annulus' only)
thickness : scalar
thickness (shape='annulus' only)
lines : integer
number of lines in polygon (shape='polygon' only)
include_center : boolean
include center of annulus boolean (shape='annulus' only)
Returns
-------
ANTsImage
Example
-------
>>> import ants
>>> fi = ants.image_read( ants.get_ants_data('r16') , 2 )
>>> mask = ants.get_mask( fi )
>>> dilated_ball = ants.morphology( mask, operation='dilate', radius=3, mtype='binary', shape='ball')
>>> eroded_box = ants.morphology( mask, operation='erode', radius=3, mtype='binary', shape='box')
>>> opened_annulus = ants.morphology( mask, operation='open', radius=5, mtype='binary', shape='annulus', thickness=2)
"""
label = ants.image_read(label)
ants.morphology(label, operation='dilate', radius=30, mtype='binary', shape='ball').to_filename(dilated_label_fname)


def compare_images(predicted_image, ground_truth_image, metric_type='correlation'):
"""
Measure similarity between two images.
NOTE: Similarity is actually returned as distance (i.e. dissimilarity)
per ITK/ANTs convention. E.g. using Correlation metric, the similarity
of an image with itself returns -1.
"""
# predicted_image = ants.image_read(predicted_image)
# ground_truth_image = ants.image_read(ground_truth_image)
if metric_type == 'correlation':
metric = ants.image_similarity(predicted_image, ground_truth_image, metric_type='ANTSNeighborhoodCorrelation')
metric = np.abs(metric)
else:
metric = ants.label_overlap_measures(predicted_image, ground_truth_image).TotalOrTargetOverlap[1]

return metric
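For reference, a hedged usage sketch of compare_images on a bundled ANTsPy sample image; the import assumes tests/utils.py is on the Python path. An image compared against itself should give a correlation-based value of about 1 after the absolute value is taken.

import ants
from utils import compare_images  # assumes tests/utils.py is importable

img = ants.image_read(ants.get_ants_data("r16"))
print(compare_images(img, img, metric_type="correlation"))  # ~1.0 for identical images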