Merge pull request #10 from NOEL-MNI/dev
Fix dependency incompatibilities between `deepFCD` and `deepMask` + more
ravnoor authored Aug 3, 2022
2 parents ba8712e + 2739bb6 commit 28fcb18
Showing 11 changed files with 222 additions and 94 deletions.
21 changes: 11 additions & 10 deletions Dockerfile
@@ -39,20 +39,21 @@ RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py37_4.11.0-Linu
&& /bin/bash Miniconda3-py37_4.11.0-Linux-x86_64.sh -b -p /home/user/conda \
&& rm -f Miniconda3-py37_4.11.0-Linux-x86_64.sh

RUN python -m pip install --upgrade --force --ignore-installed pip
RUN git clone --depth 1 https://github.com/NOEL-MNI/deepMask.git \
&& rm -rf deepMask/.git

COPY app/requirements.txt /app/requirements.txt

RUN python -m pip install -r /app/requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html

RUN conda install -c conda-forge pygpu==0.7.6

RUN pip cache purge
RUN eval "$(conda shell.bash hook)" \
&& conda create -n preprocess python=3.7 \
&& conda activate preprocess \
&& python -m pip install -r deepMask/app/requirements.txt \
&& conda deactivate

COPY app/ /app/

RUN sudo chmod -R 777 /app && sudo chmod +x /app/inference.py
RUN python -m pip install -r /app/requirements.txt \
&& conda install -c conda-forge pygpu==0.7.6 \
&& pip cache purge

RUN git clone --depth 1 https://github.com/NOEL-MNI/deepMask.git && rm -rf deepMask/.git
RUN sudo chmod -R 777 /app && sudo chmod +x /app/inference.py

CMD ["python3"]
57 changes: 45 additions & 12 deletions README.md
@@ -41,29 +41,38 @@
2. Keras == 2.2.4
3. Theano == 1.0.4
4. ANTsPy == 0.3.2 (for MRI preprocessing)
5. PyTorch == 1.4.0 (for deepMask)
4. ANTsPyNet == 0.1.8 (for MRI preprocessing)
5. PyTorch == 1.11.0 (for deepMask)
6. h5py == 2.10.0
+ app/requirements.txt
+ app/deepMask/app/requirements.txt
```

## Installation

```bash
# clone the repo with the deepMask submodule
git clone --depth 1 --recurse-submodules -j2 https://github.com/NOEL-MNI/deepFCD.git
git clone --recurse-submodules -j2 https://github.com/NOEL-MNI/deepFCD.git
cd deepFCD

# install Miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda

# create and activate a Conda environment
# create and activate a Conda environment for preprocessing
conda create -n preprocess python=3.7
conda activate preprocess
# install dependencies using pip
python -m pip install -r app/deepMask/app/requirements.txt
conda deactivate

# create and activate a Conda environment for deepFCD
conda create -n deepFCD python=3.7
conda activate deepFCD

# install dependencies using pip
python -m pip install -r app/requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
python -m pip install -r app/requirements.txt
conda install -c conda-forge pygpu=0.7.6

```
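
To confirm that both environments resolved without the incompatibilities this PR addresses, a quick import check in each one can help. This is an illustrative sketch rather than an upstream-documented step; it assumes the two requirements files pin the packages listed under Prerequisites (PyTorch in `app/deepMask/app/requirements.txt`, Theano/Keras in `app/requirements.txt`).

```bash
# preprocessing environment: the PyTorch stack used by deepMask
conda activate preprocess
python -c "import torch; print('torch', torch.__version__)"
conda deactivate

# deepFCD environment: the Theano/Keras stack
conda activate deepFCD
KERAS_BACKEND=theano python -c "import theano, keras; print('keras', keras.__version__)"
conda deactivate
```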


@@ -85,17 +94,36 @@ ${IO_DIRECTORY}

### 2. Training routine [TODO]

### 3.1 Inference
### 3.1 Inference (CPU)
```bash
chmod +x ./app/inference.py    # make the script executable - ensure you have the requisite permissions
export OMP_NUM_THREADS=6       # specify number of threads to initialize when using the CPU - by default this variable is set to half the number of available logical cores
./app/inference.py \ # the script to perform inference on the multimodal MRI images
${PATIENT_ID} \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
${T1_IMAGE} \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
${FLAIR_IMAGE} \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
${IO_DIRECTORY} \ # input/output directory
cuda0     # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
cpu \     # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
1 \       # perform (`1`) or skip (`0`) brain extraction
1         # perform (`1`) or skip (`0`) image pre-processing

```
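As a concrete (hypothetical) counterpart to the annotated template above, a CPU run for a subject coded FCD_001 whose images sit under `/data/FCD_001/` could look like the following; the subject ID, filenames, and directory are illustrative only.

```bash
export OMP_NUM_THREADS=6
./app/inference.py FCD_001 FCD_001_t1.nii.gz FCD_001_flair.nii.gz /data cpu 1 1
```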
### 3.2 Inference using Docker (GPU)

### 3.2 Inference (GPU)
```bash
chmod +x ./app/inference.py    # make the script executable - ensure you have the requisite permissions
./app/inference.py \ # the script to perform inference on the multimodal MRI images
${PATIENT_ID} \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
${T1_IMAGE} \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
${FLAIR_IMAGE} \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
${IO_DIRECTORY} \ # input/output directory
cuda0 \   # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
1 \       # perform (`1`) or skip (`0`) brain extraction
1         # perform (`1`) or skip (`0`) image pre-processing

```

### 3.3 Inference using Docker (GPU), requires [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
```bash
docker run --rm -it --init \
--gpus=all \ # expose the host GPUs to the guest docker container
@@ -107,21 +135,26 @@ docker run --rm -it --init \
${T1_IMAGE} \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
${FLAIR_IMAGE} \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
/io \ # input/output directory within the container mapped to ${IO_DIRECTORY} or ${PWD} [ DO NOT MODIFY]
cuda0     # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
cuda0 \   # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
1 \       # perform (`1`) or skip (`0`) brain extraction
1         # perform (`1`) or skip (`0`) image pre-processing
```
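
Filling in the collapsed middle of the template with the same flags shown in the CPU variant below, a hypothetical single-GPU run could read as follows; the host data directory and image filenames are placeholders.

```bash
docker run --rm -it --init \
    --gpus=all \
    --user="$(id -u):$(id -g)" \
    --volume="/data:/io" \
    noelmni/deep-fcd:latest \
    /app/inference.py FCD_001 FCD_001_t1.nii.gz FCD_001_flair.nii.gz /io cuda0 1 1
```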

### 3.3 Inference using Docker (CPU)
### 3.4 Inference using Docker (CPU)
```bash
docker run --rm -it --init \
--user="$(id -u):$(id -g)" \ # map user permissions appropriately
--volume="$PWD:/io" \ # $PWD refers to the present working directory containing the input images, can be modified to a local host directory
--env OMP_NUM_THREADS=6 \ # specify number of threads to initialize - by default this variable is set to half the number of available logical cores
noelmni/deep-fcd:latest \ # docker image containing all the necessary software dependencies
/app/inference.py \ # the script to perform inference on the multimodal MRI images
${PATIENT_ID} \ # prefix for the filenames; for example: FCD_001 (needed for outputs only)
${T1_IMAGE} \ # T1-weighted image; for example: FCD_001_t1.nii.gz or t1.nii.gz [T1 is specified before FLAIR - order is important]
${FLAIR_IMAGE} \ # T2-weighted FLAIR image; for example: FCD_001_t2.nii.gz or flair.nii.gz [T1 is specified before FLAIR - order is important]
/io \ # input/output directory within the container mapped to ${IO_DIRECTORY} or ${PWD} [ DO NOT MODIFY]
cpu       # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
cpu \     # toggle between CPU and GPU - 'cpu' selects the CPU, 'cudaX' selects GPU X, where X is in [0, N-1] and N is the total number of installed GPUs
1 \       # perform (`1`) or skip (`0`) brain extraction
1         # perform (`1`) or skip (`0`) image pre-processing
```

## License
2 changes: 1 addition & 1 deletion app/deepMask
93 changes: 43 additions & 50 deletions app/inference.py
@@ -2,14 +2,23 @@

import os
import sys
import logging
import multiprocessing
from mo_dots import Data
import subprocess
from config.experiment import options
import warnings
warnings.filterwarnings('ignore')
import time
import numpy as np
import setproctitle as spt
from tqdm import tqdm
from utils.helpers import *

logging.basicConfig(level=logging.DEBUG,
style='{',
datefmt='%Y-%m-%d %H:%M:%S',
format='{asctime} {levelname} {filename}:{lineno}: {message}')

os.environ["KERAS_BACKEND"] = "theano"

@@ -18,74 +27,58 @@
if options['cuda'].startswith('cuda1'):
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cuda1,floatX=float32,dnn.enabled=False"
elif options['cuda'].startswith('cpu'):
os.environ['OMP_NUM_THREADS'] = str(multiprocessing.cpu_count() // 2)
var = os.getenv('OMP_NUM_THREADS', None)
cores = str(multiprocessing.cpu_count() // 2)
var = os.getenv('OMP_NUM_THREADS', cores)
try:
print("# of threads initialized: {}".format(int(var)))
logging.info("# of threads initialized: {}".format(int(var)))
except ValueError:
raise TypeError("The environment variable OMP_NUM_THREADS"
" should be a number, got '%s'." % var)
# os.environ['openmp'] = 'True'
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cpu,openmp=True,floatX=float32"
else:
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cuda0,floatX=float32,dnn.enabled=False"
print(os.environ["THEANO_FLAGS"])
logging.info(os.environ["THEANO_FLAGS"])

from models.noel_models_keras import *
from keras.models import load_model
from keras import backend as K
from utils.metrics import *
from utils.base import *

# deepMask imports
import torch
from mo_dots import Data
from deepMask.app.utils.data import *
from deepMask.app.utils.deepmask import *
from deepMask.app.utils.image_processing import noelImageProcessor
import deepMask.app.vnet as vnet

# configuration
args = Data()
args.dir = sys.argv[4]
args.id = sys.argv[1]
args.brain_masking = True # set to True or any non-zero value for brain extraction or skull-removal, False otherwise
args.preprocess = True # co-register T1 and T2 contrasts before brain extraction
args.outdir = os.path.join(args.dir, args.id)
args.seed = 666
args.t1_fname = sys.argv[2]
args.t2_fname = sys.argv[3]
args.dir = sys.argv[4]
if not os.path.isabs(args.dir):
args.dir = os.path.abspath(args.dir)

args.brain_masking = int(sys.argv[6]) # set to True or any non-zero value for brain extraction or skull-removal, False otherwise
args.preprocess = int(sys.argv[7]) # co-register T1 and T2 images to MNI152 space and N3 correction before brain extraction (True/False)
args.outdir = os.path.join(args.dir, args.id)

args.t1 = os.path.join(args.outdir, args.t1_fname)
args.t2 = os.path.join(args.outdir, args.t2_fname)
cwd = os.path.dirname(__file__)

if args.brain_masking:
# trained weights based on manually corrected masks from
# 153 patients with cortical malformations
args.inference = os.path.join(cwd, 'deepMask/app/weights', 'vnet_masker_model_best.pth.tar')
# resize all input images to this resolution matching training data
args.resize = (160,160,160)
args.use_gpu = False
args.cuda = torch.cuda.is_available() and args.use_gpu
torch.manual_seed(args.seed)
args.device_ids = list(range(torch.cuda.device_count()))
if args.cuda:
torch.cuda.manual_seed(args.seed)
print("build vnet, using GPU")
else:
print("build vnet, using CPU")
model = vnet.build_model(args)
template = os.path.join(cwd, 'deepMask/app/template', 'mni_icbm152_t1_tal_nlin_sym_09a.nii.gz')
cwd = os.path.realpath(os.path.dirname(__file__))

if bool(args.brain_masking):
if options['cuda'].startswith('cuda'):
args.use_gpu = True
else:
args.use_gpu = False
# MRI pre-processing configuration
args.output_suffix = '_brain_final.nii.gz'

noelImageProcessor(id=args.id, t1=args.t1, t2=args.t2, output_suffix=args.output_suffix, output_dir=args.outdir, template=template, usen3=True, args=args, model=model, preprocess=args.preprocess).pipeline()

preprocess_sh = os.path.join(cwd, 'preprocess.sh')
subprocess.check_call([preprocess_sh, args.id, args.t1_fname, args.t2_fname, args.dir, bool2str(args.preprocess), bool2str(args.use_gpu)])

args.t1 = os.path.join(args.outdir, args.id + '_t1' + args.output_suffix)
args.t2 = os.path.join(args.outdir, args.id + '_t2' + args.output_suffix)
else:
print("Skipping image preprocessing and brain masking, presumably images are co-registered, bias-corrected, and skull-stripped")
logging.info('Skipping image preprocessing and brain masking, presumably images are co-registered, bias-corrected, and skull-stripped')

# deepFCD configuration
K.set_image_dim_ordering('th')
@@ -106,7 +99,7 @@
options['test_folder'] = args.dir
options['weight_paths'] = os.path.join(cwd, 'weights')
options['experiment'] = 'noel_deepFCD_dropoutMC'
print("experiment: {}".format(options['experiment']))
logging.info("experiment: {}".format(options['experiment']))
spt.setproctitle(options['experiment'])

# --------------------------------------------------
@@ -118,13 +111,13 @@
model = off_the_shelf_model(options)

load_weights = os.path.join(options['weight_paths'], 'noel_deepFCD_dropoutMC_model_1.h5')
print("loading DNN1, model[0]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[0]: {} doesn't exist".format(load_weights))
logging.info("loading DNN1, model[0]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[0]: {} doesn't exist".format(load_weights))
model[0] = load_model(load_weights)

load_weights = os.path.join(options['weight_paths'], 'noel_deepFCD_dropoutMC_model_2.h5')
print("loading DNN2, model[1]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[1]: {} doesn't exist".format(load_weights))
logging.info("loading DNN2, model[1]: {} exists".format(load_weights)) if os.path.isfile(load_weights) else sys.exit("model[1]: {} doesn't exist".format(load_weights))
model[1] = load_model(load_weights)
print(model[1].summary())
logging.info(model[1].summary())

# --------------------------------------------------
# test the cascaded model
@@ -147,16 +140,16 @@
pred_var_fname = os.path.join(options['pred_folder'], scan + '_prob_var_1.nii.gz')

if np.logical_and(os.path.isfile(pred_mean_fname), os.path.isfile(pred_var_fname)):
print("prediction for {} already exists".format(scan))
logging.info("prediction for {} already exists".format(scan))
continue

options['test_scan'] = scan

start = time.time()
print('\n')
print('-'*70)
print("testing the model for scan: {}".format(scan))
print('-'*70)
logging.info('\n')
logging.info('-'*70)
logging.info("testing the model for scan: {}".format(scan))
logging.info('-'*70)

# test0: prediction/stage1
# test1: pred/stage2
@@ -166,6 +159,6 @@

end = time.time()
diff = (end - start) // 60
print("-"*70)
print("time elapsed: ~ {} minutes".format(diff))
print("-"*70)
logging.info("-"*70)
logging.info("time elapsed: ~ {} minutes".format(diff))
logging.info("-"*70)
59 changes: 59 additions & 0 deletions app/preprocess.py
@@ -0,0 +1,59 @@
import os
from mo_dots import to_data
import psutil
import torch
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
from deepMask.app.utils.data import *
from deepMask.app.utils.deepmask import *
from deepMask.app.utils.image_processing import noelImageProcessor
import deepMask.app.vnet as vnet

# configuration
# parse command line arguments
parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--id", dest='id', default="FCD_123", help="Alphanumeric patient code")
parser.add_argument("-t1", "--t1_fname", dest='t1_fname', default="t1.nii.gz", help="T1-weighted image")
parser.add_argument("-t2", "--t2_fname", dest='t2_fname', default="t2.nii.gz", help="T2-weighted image")
parser.add_argument("-d", "--dir", dest='dir', default="data/", help="Directory containing the input images")

parser.add_argument("-p", "--preprocess", dest='preprocess', action='store_true', help="Co-register and perform non-uniformity correction of input images")
parser.add_argument("-g", "--use_gpu", dest='use_gpu', action='store_true', help="Compute using GPU, defaults to using CPU")
args = to_data(vars(parser.parse_args()))

# set up parameters
args.outdir = os.path.join(args.dir, args.id)
args.t1 = os.path.join(args.outdir, args.t1_fname)
args.t2 = os.path.join(args.outdir, args.t2_fname)
args.seed = 666

cwd = os.path.dirname(__file__)

# trained weights based on manually corrected masks from
# 153 patients with cortical malformations
args.inference = os.path.join(cwd, 'deepMask/app/weights', 'vnet_masker_model_best.pth.tar')
# resize all input images to this resolution matching training data
args.resize = (160,160,160)
args.cuda = torch.cuda.is_available() and args.use_gpu
torch.manual_seed(args.seed)
args.device_ids = list(range(torch.cuda.device_count()))

mem_size = psutil.virtual_memory().available // (1024*1024*1024) # available RAM in GB
# mem_size = 32
if mem_size < 64 and not args.use_gpu:
os.environ["BRAIN_MASKING"] = "cpu"
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
model = None
else:
if args.cuda:
torch.cuda.manual_seed(args.seed)
print("build vnet, using GPU")
else:
print("build vnet, using CPU")
model = vnet.build_model(args)

template = os.path.join(cwd, 'deepMask/app/template', 'mni_icbm152_t1_tal_nlin_sym_09a.nii.gz')

# MRI pre-processing configuration
args.output_suffix = '_brain_final.nii.gz'

noelImageProcessor(id=args.id, t1=args.t1, t2=args.t2, output_suffix=args.output_suffix, output_dir=args.outdir, template=template, usen3=True, args=args, model=model, preprocess=args.preprocess).pipeline()
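
Because `preprocess.py` carries its own argparse interface, it can also be exercised on its own, assuming the `preprocess` conda environment described in the README is active; a hypothetical standalone invocation using the flags defined above (patient ID, filenames, and directory are illustrative) would be:

```bash
# run MRI preprocessing and V-Net brain masking outside of inference.py
# expects the images under data/FCD_001/ per the --dir/--id convention above
python3 app/preprocess.py -i FCD_001 -t1 t1.nii.gz -t2 t2.nii.gz -d data/ -p
# add -g to run the brain masker on the GPU (requires a CUDA-enabled PyTorch build)
```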