Merge pull request #26 from fliem/grouptabs
Export of group-level tables for aparc- and aseg-stats
chrisgorgo authored Feb 13, 2017
2 parents 1b991ab + fb74f81 commit c2892d6
Showing 4 changed files with 193 additions and 76 deletions.
5 changes: 5 additions & 0 deletions Dockerfile
@@ -60,6 +60,11 @@ ENV PATH /opt/freesurfer/bin:/opt/freesurfer/fsfast/bin:/opt/freesurfer/tktools:
ENV PYTHONPATH=""
RUN echo "cHJpbnRmICJrcnp5c3p0b2YuZ29yZ29sZXdza2lAZ21haWwuY29tXG41MTcyXG4gKkN2dW12RVYzelRmZ1xuRlM1Si8yYzFhZ2c0RVxuIiA+IC9vcHQvZnJlZXN1cmZlci9saWNlbnNlLnR4dAo=" | base64 -d | sh

# make freesurfer python scripts python3 ready
RUN 2to3-3.4 -w $FREESURFER_HOME/bin/aparcstats2table
RUN 2to3-3.4 -w $FREESURFER_HOME/bin/asegstats2table
RUN 2to3-3.4 -w $FREESURFER_HOME/bin/*.py
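The three RUN lines above rewrite FreeSurfer's Python 2 helper scripts in place with `2to3` so they run under Python 3. A rough heuristic for checking whether a script still contains Python 2-only `print` statements (a regex sketch, not a real parse; the sample file is fabricated for illustration):

```shell
# Create a throwaway file containing a Python 2-style print statement.
f=$(mktemp)
printf 'print "hello"\n' > "$f"

# Flag `print` followed by whitespace and anything but an opening paren,
# which 2to3 would rewrite to print("hello").
if grep -Eq '^[[:space:]]*print[[:space:]]+[^(]' "$f"; then
  echo "py2-style print found"
fi
```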

RUN mkdir /scratch
RUN mkdir /local-scratch

173 changes: 101 additions & 72 deletions README.md
@@ -21,76 +21,90 @@ https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferMethodsCitation
### Usage
This App has the following command line arguments:

$ docker run -ti --rm bids/freesurfer --help
usage: run.py [-h]
[--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[--template_name TEMPLATE_NAME]
bids_dir output_dir {participant,group}

FreeSurfer recon-all + custom template generation.

NOTE: if scripts/IsRunning is present, this pipeline assumes recon-all was
interrupted and removes the directory then re-runs the processing stream.

positional arguments:
bids_dir The directory with the input dataset formatted
according to the BIDS standard.
output_dir The directory where the output files should be stored.
If you are running group level analysis this folder
should be prepopulated with the results of
the participant level analysis.
{participant,group} Level of the analysis that will be performed. Multiple
participant level analyses can be run independently
(in parallel) using the same output_dir.
required arguments:
--license_key LICENSE_KEY
FreeSurfer license key - letters and numbers after "*"
in the email you received after registration. To
register (for free) visit
https://surfer.nmr.mgh.harvard.edu/registration.html

optional arguments:
-h, --help show this help message and exit
--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
The label of the participant that should be analyzed.
The label corresponds to sub-<participant_label> from
the BIDS spec (so it does not include "sub-"). If this
parameter is not provided all subjects should be
analyzed. Multiple participants can be specified with
a space separated list.
--n_cpus N_CPUS Number of CPUs/cores available to use. (Default is 1)
--stages {all,autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon-pial,autorecon3,autorecon-all}
Recon-all stages to run. (Default is autorecon-all)
--template_name TEMPLATE_NAME
Name for the custom group level template generated
for this dataset.
--acquisition_label ACQUISITION_LABEL
If the dataset contains multiple T1 weighted images
from different acquisitions which one should be used?
Corresponds to "acq-<acquisition_label>"
--multiple_sessions {longitudinal, multiday}
For datasets with multiday sessions where you do not
want to use the longitudinal pipeline, i.e., sessions
were back-to-back, set this to multiday, otherwise
sessions with T1w data will be considered independent
sessions for longitudinal analysis.
--refine_pial {T2,FLAIR,None,T1only}
If the dataset contains 3D T2 or T2 FLAIR weighted
images (~1x1x1), these can be used to refine the pial
surface. The current default is to look for
appropriate T2s, then look for appropriate FLAIRs
(resolution <1.2mm isovolumetric). If you want to
ignore these, specify None or T1only to generate
surfaces on the T1 alone.
--refine_pial_acquisition_label ACQUISITION_LABEL
If the dataset contains multiple T2 or FLAIR weighted
images from different acquisitions which one should be
used? Corresponds to "acq-<acquisition_label>"
--hires_mode {auto,enable,disable}
Submillimeter (high resolution) processing. 'auto' -
use only if <1.0mm data detected, 'enable' - force on,
'disable' - force off

$ docker run -ti --rm bids/freesurfer --help
usage: run.py [-h]
[--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[--n_cpus N_CPUS]
[--stages {autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon2-pial,autorecon3,autorecon-all,all}
[{autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon2-pial,autorecon3,autorecon-all,all} ...]]
[--template_name TEMPLATE_NAME] --license_key LICENSE_KEY
[--acquisition_label ACQUISITION_LABEL]
[--multiple_sessions {longitudinal,multiday}]
[--refine_pial {T2,FLAIR,None,T1only}]
[--hires_mode {auto,enable,disable}]
[--parcellations {aparc,aparc.a2009s} [{aparc,aparc.a2009s} ...]]
[--measurements {area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind}
[{area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind} ...]]
[-v]
bids_dir output_dir {participant,group1,group2}
FreeSurfer recon-all + custom template generation.

positional arguments:
bids_dir The directory with the input dataset formatted
according to the BIDS standard.
output_dir The directory where the output files should be stored.
If you are running group level analysis this folder
should be prepopulated with the results of
the participant level analysis.
{participant,group1,group2}
Level of the analysis that will be performed. Multiple
participant level analyses can be run independently
(in parallel) using the same output_dir. "group1"
creates a study specific group template. "group2"
exports group stats tables for cortical parcellation
and subcortical segmentation.

optional arguments:
-h, --help show this help message and exit
--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
The label of the participant that should be analyzed.
The label corresponds to sub-<participant_label> from
the BIDS spec (so it does not include "sub-"). If this
parameter is not provided all subjects should be
analyzed. Multiple participants can be specified with
a space separated list.
--n_cpus N_CPUS Number of CPUs/cores available to use.
--stages {autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon2-pial,autorecon3,autorecon-all,all}
[{autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon2-pial,autorecon3,autorecon-all,all} ...]
Autorecon stages to run.
--template_name TEMPLATE_NAME
Name for the custom group level template generated for
this dataset
--license_key LICENSE_KEY
FreeSurfer license key - letters and numbers after "*"
in the email you received after registration. To
register (for free) visit
https://surfer.nmr.mgh.harvard.edu/registration.html
--acquisition_label ACQUISITION_LABEL
If the dataset contains multiple T1 weighted images
from different acquisitions which one should be used?
Corresponds to "acq-<acquisition_label>"
--multiple_sessions {longitudinal,multiday}
For datasets with multiday sessions where you do not
want to use the longitudinal pipeline, i.e., sessions
were back-to-back, set this to multiday, otherwise
sessions with T1w data will be considered independent
sessions for longitudinal analysis.
--refine_pial {T2,FLAIR,None,T1only}
If the dataset contains 3D T2 or T2 FLAIR weighted
images (~1x1x1), these can be used to refine the pial
surface. If you want to ignore these, specify None or
T1only to base surfaces on the T1 alone.
--hires_mode {auto,enable,disable}
Submillimeter (high resolution) processing. 'auto' -
use only if <1.0mm data detected, 'enable' - force on,
'disable' - force off
--parcellations {aparc,aparc.a2009s} [{aparc,aparc.a2009s} ...]
Group2 option: cortical parcellation(s) to extract
stats from.
--measurements {area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind}
[{area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind} ...]
Group2 option: cortical measurements to extract stats for.
-v, --version show program's version number and exit


#### Participant level
To run it in participant level mode (for one participant):

docker run -ti --rm \
@@ -100,12 +114,27 @@ To run it in participant level mode (for one participant):
/bids_dataset /outputs participant --participant_label 01 \
--license_key "XXXXXXXX"
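Participant-level analyses are independent, so a simple shell loop can fan them out. A hedged sketch (the paths, image name, and license key follow the example above; the labels and the `xargs`/`parallel` fan-out are assumptions about your setup, so the helper only prints each command as a dry run):

```shell
# Build (but do not execute) the participant-level docker command for one
# subject label. Arguments: bids_dir, output_dir, label, license key.
participant_cmd() {
  printf 'docker run -ti --rm -v %s:/bids_dataset:ro -v %s:/outputs bids/freesurfer /bids_dataset /outputs participant --participant_label %s --license_key "%s"\n' \
    "$1" "$2" "$3" "$4"
}

# Dry run over three hypothetical labels; pipe the output to
# `xargs -P 4 -I{} sh -c '{}'` or GNU parallel to actually execute.
for label in 01 02 03; do
  participant_cmd /Users/filo/data/ds005 /Users/filo/outputs "$label" XXXXXXXX
done
```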

After doing this for all subjects (potentially in parallel) the group level analysis
can be run:

#### Group level
After doing this for all subjects (potentially in parallel) the
group level analyses can be run.

To create a study specific template run:

docker run -ti --rm \
-v /Users/filo/data/ds005:/bids_dataset:ro \
-v /Users/filo/outputs:/outputs \
bids/freesurfer \
/bids_dataset /outputs group1 \
--license_key "XXXXXXXX"

To export tables with aggregated measurements within regions of
cortical parcellation and subcortical segmentation run:

docker run -ti --rm \
-v /Users/filo/data/ds005:/bids_dataset:ro \
-v /Users/filo/outputs:/outputs \
bids/freesurfer \
/bids_dataset /outputs group \
/bids_dataset /outputs group2 \
--license_key "XXXXXXXX"
Also see the *--parcellations* and *--measurements* arguments.
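The exported stats tables (e.g. `lh.aparc.thickness.tsv`) are tab-separated with one row per subject. A rough sketch of pulling one region's values out with `awk` — the two-line table here is fabricated for illustration, and the column names are assumptions about the parcellation output:

```shell
# Fabricate a tiny stand-in for a group2 stats table: first column is the
# subject ID, remaining columns are per-region thickness values.
tsv=$(mktemp)
printf 'lh.aparc.thickness\tbankssts_thickness\tcaudalanteriorcingulate_thickness\n' > "$tsv"
printf 'sub-01\t2.53\t2.61\n' >> "$tsv"

# Print subject ID plus the first region's value for every data row.
awk -F'\t' 'NR > 1 { print $1, $2 }' "$tsv"
```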
13 changes: 13 additions & 0 deletions circle.yml
@@ -1,3 +1,8 @@
general:
artifacts:
- "~/outputs1"
- "~/outputs2"

machine:
services:
- docker #don't use 1.10 - caching is broken
@@ -10,6 +15,9 @@ dependencies:
override:
- if [[ -e ~/docker/image.tar ]]; then docker load -i ~/docker/image.tar; fi
- if [[ ! -d ~/data/ds114_test1 ]]; then wget -c -O ${HOME}/ds114_test1.tar "https://files.osf.io/v1/resources/9q7dv/providers/osfstorage/57e54a326c613b01d7d3ed90" && mkdir -p ${HOME}/data && tar xf ${HOME}/ds114_test1.tar -C ${HOME}/data; fi
- if [[ ! -d ~/data/ds114_test2 ]]; then wget -c -O ${HOME}/ds114_test2.tar "https://files.osf.io/v1/resources/9q7dv/providers/osfstorage/57e549f9b83f6901d457d162" && mkdir -p ${HOME}/data && tar xf ${HOME}/ds114_test2.tar -C ${HOME}/data; fi
- if [[ ! -d ~/data/ds114_test1_freesurfer ]]; then wget -c -O ${HOME}/ds114_test1_freesurfer.tar "https://files.osf.io/v1/resources/9q7dv/providers/osfstorage/5882adf3b83f6901f564da49" && mkdir -p ${HOME}/data && tar xf ${HOME}/ds114_test1_freesurfer.tar -C ${HOME}/data; fi
- if [[ ! -d ~/data/ds114_test2_freesurfer ]]; then wget -c -O ${HOME}/ds114_test2_freesurfer.tar "https://files.osf.io/v1/resources/9q7dv/providers/osfstorage/5882b0e3b83f6901fb64da18" && mkdir -p ${HOME}/data && tar xf ${HOME}/ds114_test2_freesurfer.tar -C ${HOME}/data; fi
- git describe --tags > version
- docker build -t bids/${CIRCLE_PROJECT_REPONAME} . :
timeout: 21600
@@ -21,6 +29,11 @@ test:
- docker run -ti --rm --read-only -v /tmp:/tmp -v /var/tmp:/var/tmp -v ${HOME}/data/ds114_test1:/bids_dataset bids/${CIRCLE_PROJECT_REPONAME,,} --version
- docker run -ti --rm --read-only -v /tmp:/tmp -v /var/tmp:/var/tmp -v ${HOME}/data/ds114_test1:/bids_dataset -v ${HOME}/outputs1:/outputs bids/${CIRCLE_PROJECT_REPONAME,,} /bids_dataset /outputs participant --participant_label 01 --license_key="~/test.key" --stages autorecon1:
timeout: 21600
# group2 tests
- docker run -ti --rm --read-only -v /tmp:/tmp -v /var/tmp:/var/tmp -v ${HOME}/data/ds114_test1:/bids_dataset -v ${HOME}/data/ds114_test1_freesurfer:/outputs bids/${CIRCLE_PROJECT_REPONAME,,} /bids_dataset /outputs group2 --license_key="~/test.key" && mkdir -p ${HOME}/outputs1/ && sudo mv ${HOME}/data/ds114_test1_freesurfer/00_group* ${HOME}/outputs1/ && cat ${HOME}/outputs1/00_group2_stats_tables/lh.aparc.thickness.tsv :
timeout: 21600
- docker run -ti --rm --read-only -v /tmp:/tmp -v /var/tmp:/var/tmp -v ${HOME}/data/ds114_test2:/bids_dataset -v ${HOME}/data/ds114_test2_freesurfer:/outputs bids/${CIRCLE_PROJECT_REPONAME,,} /bids_dataset /outputs group2 --license_key="~/test.key" && mkdir -p ${HOME}/outputs2/ && sudo mv ${HOME}/data/ds114_test2_freesurfer/00_group* ${HOME}/outputs2/ && cat ${HOME}/outputs2/00_group2_stats_tables/lh.aparc.thickness.tsv :
timeout: 21600

deployment:
hub:
