Generate a processing report #40

Closed · 5 tasks

Labels: big-effort, enhancement (New feature or request)

sebastientourbier (Member) opened this issue Nov 11, 2020 · 1 comment

sebastientourbier commented Nov 11, 2020

I created this issue to brainstorm how we could integrate, in the future, the generation of a processing report to facilitate quality control of processing on large datasets.

As far as I can see, we would need the following:

  • The creation of a PNG image in the run() function of each interface, using for instance nilearn for visualization of NIfTI images, or matplotlib or seaborn for illustration of the motion. At the same time, we will create a new output for each interface that we will use to connect to a new interface that creates the report.
  • The creation of a JSON file in the run() function of each interface that describes the information to show in the report (for instance parameter values). Similarly, we will create a new output for each interface that we will use to connect to the new interface that creates the report.
  • The creation of an interface that collects the PNG images and JSON files for each stage and fills a Jinja2 HTML template appropriately (a minimal sketch is included below).
  • Integration of this interface and connection of the new nodes.
  • Moving and renaming the report to pymialsrtk/sub-01/report/sub-01_report.html, for instance using the DataSinker.

Please take all of this as suggestions, and further suggestions are welcome 👍
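
To make the report-assembling item more concrete, here is a minimal sketch of what such an interface could do once every stage exposes a PNG and a JSON output. All names, paths, and JSON keys below are hypothetical and only illustrate the Jinja2 templating step; they are not part of pymialsrtk.

# Minimal sketch, assuming each stage produced a PNG and a JSON sidecar.
# All names and keys are hypothetical.
import json
from pathlib import Path

from jinja2 import Template

# In practice the template would live in its own .html file.
TEMPLATE = Template("""
<html><body>
<h1>Processing report: {{ subject }}</h1>
{% for stage in stages %}
  <h2>{{ stage.name }}</h2>
  <img src="{{ stage.png }}" width="800"/>
  <pre>{{ stage.params }}</pre>
{% endfor %}
</body></html>
""")


def assemble_report(subject, stage_pngs, stage_jsons, out_html):
    """Collect per-stage PNG/JSON outputs and render a single HTML report."""
    stages = []
    for png, json_path in zip(stage_pngs, stage_jsons):
        with open(json_path) as f:
            params = json.load(f)
        stages.append({
            "name": Path(json_path).stem,
            "png": png,
            "params": json.dumps(params, indent=2),
        })
    Path(out_html).write_text(TEMPLATE.render(subject=subject, stages=stages))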

Suggested Content

  • For each preprocessed LR scan, a nilearn plot_anat figure and a plot of the slice motion indices.
  • A nilearn plot_anat figure of the SR image, the parameters of the pipeline and of the SR, and ultimately the MSE between the scans and the scans simulated from the SR by applying the forward model (see the sketch below).
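
As a rough illustration of this suggested content, the sketch below shows how each per-scan figure and the MSE could be produced. The helper names and the motion_indices input are hypothetical; only plot_anat, matplotlib, nibabel, and numpy calls are standard.

# Hedged sketch: generate the per-scan anatomical cut, a slice-motion plot,
# and the MSE between an LR scan and the scan simulated from the SR image.
# Helper names and the `motion_indices` input are hypothetical.
import matplotlib
matplotlib.use("Agg")  # headless backend for batch report generation
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
from nilearn.plotting import plot_anat


def save_scan_figures(scan_path, motion_indices, out_prefix):
    """Save '<out_prefix>_anat.png' and '<out_prefix>_motion.png'."""
    plot_anat(anat_img=scan_path, title=scan_path,
              output_file=f"{out_prefix}_anat.png")
    fig, ax = plt.subplots(figsize=(6, 2))
    ax.plot(motion_indices, marker="o")
    ax.set_xlabel("Slice index")
    ax.set_ylabel("Motion index")
    fig.savefig(f"{out_prefix}_motion.png", dpi=100)
    plt.close(fig)


def scan_mse(scan_path, simulated_scan_path):
    """MSE between an LR scan and the scan simulated from the SR image."""
    lr = nib.load(scan_path).get_fdata()
    sim = nib.load(simulated_scan_path).get_fdata()
    return float(np.mean((lr - sim) ** 2))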

More

Nice resources that show how to use Jinja2 (https://jinja.palletsprojects.com/en/2.11.x/) for report templating:

sebastientourbier added the enhancement (New feature or request) and big-effort labels on Nov 11, 2020

sebastientourbier (Member, Author) commented:
Another way, much faster as a first shot at generating a report, would be to use reportlab, matplotlib, and nilearn.plotting.plot_anat() to produce a PDF report.

Here is a script I wrote for defacing MRIs that generates a PDF report this way.
It could serve as a starting point:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
"""Script that QUICKSHEAR defaces MRIs in a BIDS dataset processed by Connectome Mapper 3 BIDS App.

**Syntax:** python quickshear_deface_mris.py /path/to/bids/directory [--no_deface] [--no_report]

If `--no_deface` is specified, the defacing step is skipped.
If `--no_report` is specified, the report generation step is skipped.

________________________________________________________________
Authors: Sébastien Tourbier Radiology Department, CHUV, Lausanne
Created on 2020 November 17
Version $1.0

"""
# General imports
import os
import datetime
import io
from glob import glob
import subprocess
from argparse import ArgumentParser

import warnings

# External module imports
try:
    import nibabel
    import nibabel.processing
except ImportError:
    print("nibabel not available. Can not resample brain mask to fit raw resolution.")

try:
    import ants
except ImportError:
    print("ANTSpy not installed. Could not co-register raw and resampled T1w scans and project mask.")

try:
    from nilearn.plotting import plot_anat
except ImportError:
    print("nilearn not available. Can not generate image cuts.")

try:
    import bids
    bids.config.set_option('extension_initial_dot', True)
except ImportError:
    print("pybids not available. Can not handle data.")

try:
    import matplotlib.colors as colors
    # matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
    from matplotlib.pyplot import title, suptitle, imshow, cm, figure, colorbar
except ImportError:
    print("matplotlib not available. Can not plot matrix")

try:
    from reportlab.pdfgen import canvas
    from reportlab.lib.pagesizes import A4
    from reportlab.lib.units import inch
    from reportlab.lib.utils import ImageReader
    from reportlab.pdfbase.pdfmetrics import stringWidth
except ImportError:
    print("reportlab not available. Can not generate PDF report")

try:
    from progress.bar import Bar
except ImportError:
    print("progress not available. Can not progress bar for PDF generation")


# Ignore cast warning messages generated by nilearn when producing plot_anat() figures
warnings.filterwarnings("ignore", category=UserWarning)



def create_parser():
    """Create and return the argument parser of this script."""
    p = ArgumentParser(description='Script that QUICKSHEAR defaces MRIs in a BIDS dataset '
                                   'processed by Connectome Mapper 3 BIDS App')
    p.add_argument('bids_dir', help='The directory with the input dataset '
                                    'formatted according to the BIDS standard with '
                                    'Connectome Mapper 3 BIDS App derivatives.')
    p.add_argument("--no_report", default=False, action="store_true", help="Do not create report if this flag is specified.")
    p.add_argument("--no_deface", default=False, action="store_true", help="Skip defacing steps if this flag is specified.")

    return p


def run(command, env=None):
    """Run a command specified as input via ``subprocess.run()``.

    Parameters
    ----------
    command : string
        String containing the command to be executed (required)

    env : Instance(os.environ)
        Specify a custom os.environ
    """
    merged_env = os.environ.copy()  # copy so a custom env does not mutate os.environ

    if env is not None:
        merged_env.update(env)

    process = subprocess.Popen(command, shell=True, env=merged_env)
    outs, errs = process.communicate()
    return outs, errs


def run_quickshear_deface(input_image, input_mask, output_image):
    """Create the quickshear command and executes it via the run() function.

    Parameters
    ----------
    input_image : path
        Input image to be defaced
    input_mask : path
        Input mask used by quickshear
    output_image : path
        Output defaced image
    """
    cmd = f'quickshear {input_image} {input_mask} {output_image}'
    print(f'    cmd: {cmd}')
    # cmd = f'quickshear -h'
    outs, errs = run(command=cmd)
    return outs, errs


def run_quickshear_deface_file_list(t1w_files, mask_file):
    """Execute the quickshear command using a common mask on a list of files

    Parameters
    ----------
    t1w_files : list of bids.BIDSImageFile
        List of T1w images returned by BIDSLayout.get()
    mask_file : bids.BIDSImageFile
        Mask file to use
    """
    for file in t1w_files:
        print(f'  * Deface {file.filename} with {mask_file.filename}...')
        outs, errs = run_quickshear_deface(input_image=file.path,
                                           input_mask=mask_file.path,
                                           output_image=file.path)
        print(f'    Output: {outs}')
        print(f'    Error(s): {errs}')


def resample_mask(input_mask, input_image, output_mask):
    """Resample input mask to the space of the input image.

    It calls fslpy's resample_image via the run() function.

    Parameters
    ----------
    input_mask : path
        Input mask used by quickshear

    input_image : path
        Input image to be defaced

    output_mask : path
        Output resampled mask
    """
    cmd = f'resample_image -i nearest --reference {input_image} {input_mask} {output_mask}'
    print(f'    cmd: {cmd}')
    # cmd = f'quickshear -h'
    outs, errs = run(command=cmd)
    return outs, errs


def resample_to_reference_using_ants(input_fixed, input_moving, input_mask, output_mask):
    """Resample input mask to the space of the input image.

    It rigidly co-registers the resampled and raw T1w scans and applies the estimated transform to the mask.

    Parameters
    ----------
    input_fixed : path
        Fixed input image (Target space)

    input_moving : path
        Moving input image

    input_mask : path
        Mask of the moving input image

    output_mask : path
        Output resampled mask in the target space
    """
    ants_outputs = ants.registration(fixed=ants.image_read(input_fixed),
                                     moving=ants.image_read(input_moving),
                                     type_of_transform='QuickRigid',
                                     initial_transform=None,
                                     outprefix='',
                                     mask=None,
                                     grad_step=0.2,
                                     flow_sigma=3,
                                     total_sigma=0,
                                     aff_metric='mattes',
                                     aff_sampling=32,
                                     syn_metric='mattes',
                                     syn_sampling=32,
                                     reg_iterations=(40, 20, 0),
                                     verbose=False)

    resampled_mask = ants.apply_transforms(fixed=ants.image_read(input_fixed),
                                           moving=ants.image_read(input_mask),
                                           transformlist=ants_outputs['fwdtransforms'],
                                           interpolator='nearestNeighbor')

    resampled_mask.to_filename(output_mask)

    return True


def create_report(bids_dir, output_dir, files, report_name='report.pdf'):
    """Generate a report with cuts of a list of images.

    Parameters
    ----------
    bids_dir : string
        BIDS root dataset directory

    output_dir : string
        Output directory where the report is saved

    report_name : string
        Custom report name
        (Default: 'report.pdf')

    files : list of bids.BIDSImageFile
        List of images to show in the report
    """
    # Create reportlab canvas where figures for reporting are generated
    c = canvas.Canvas(os.path.join(output_dir, report_name), pagesize=A4)
    page_width, page_height = A4

    print("Page size : %s x %s" % (page_width, page_height))

    startY = page_height - 50  # A4 height (841.89 pt) minus a top margin

    today = datetime.date.today()
    today = today.strftime('%d, %b %Y')

    text = f'Quickshear defacing report (Date: {today})'
    text_width = stringWidth(text, 'Helvetica', 12)
    c.drawString((page_width - text_width) / 2.0, startY, text)

    text = f'BIDS root directory: {bids_dir}'
    text_width = stringWidth(text, 'Helvetica', 12)
    c.drawString((page_width - text_width) / 2.0, startY - 20, text)

    # Initialize a progress bar for report generation
    bar = Bar('Processing', max=len(files))

    # For each image generate the cuts using nilearn.plot_anat() function
    offset = 0
    subjid_old = None
    for file in files:
        # Get the subject label from the filename
        subjid = file.filename.split('_')[0].split('-')[1]

        if subjid_old is None:
            text = f'Subject label: {subjid}'
            text_width = stringWidth(text, 'Helvetica', 12)
            c.drawString((page_width - text_width) / 2.0, startY - 50, text)
        else:
            if subjid != subjid_old:
                c.showPage()
                offset = 0

                text = f'Subject label: {subjid}'
                text_width = stringWidth(text, 'Helvetica', 12)
                c.drawString((page_width - text_width) / 2.0, startY - 50, text)

        fig = figure(figsize=(18, 6))
        plot_anat(
            anat_img=file.path,
            cut_coords=(0,0,0),
            output_file=None,
            display_mode='ortho',
            figure=fig, axes=None,
            title=f'Image: {file.path}',
            annotate=True,
            threshold=None,
            draw_cross=True,
            black_bg=True,
            dim='auto',
            vmin=None,
            vmax=None)

        imgdata = io.BytesIO()
        fig.savefig(imgdata, dpi=100, format='png')
        imgdata.seek(0)  # rewind the data

        image_reader = ImageReader(imgdata)
        img_height = 2 * inch
        img_width = 3 * img_height
        posY = startY - 30 - 2.5 * inch - offset
        c.drawImage(image_reader, ((page_width - img_width) / 2.0), posY, img_width, img_height)

        offset += 2.15 * inch

        subjid_old = subjid
        bar.next()

        #if posY - offset + 10 * inch < 0:
        #    c.showPage()
        #    offset = 0
    c.save()
    bar.finish()


def main(args):
    """Main function of the script that takes the arguments parsed as input.

    It runs Quickshear deface on each raw and derived T1w MRI using
    the brain mask computed by Connectome Mapper 3

    Parameters
    ----------
    args : argparse.Namespace
        Namespace of the parsed input arguments of this script
    """

    layout = bids.BIDSLayout(args.bids_dir, derivatives=True)
    subjects = layout.get_subjects()
    scopes = ['raw', 'derivatives']
    files = []

    for subjid in subjects:
        print(f'Process sub-{subjid}')
        t1w_raw_files = layout.get(subject=subjid,
                                   suffix='T1w',
                                   extension='.nii.gz',
                                   scope=scopes[0])
        # print(f'    {t1w_raw_files}\n')
        brain_masks_files = layout.get(subject=subjid,
                                       datatype='anat',
                                       suffix='mask',
                                       extension='.nii.gz',
                                       scope=scopes[1])
        # print(f'    {brain_masks_files}\n')
        t1w_deriv_files = layout.get(subject=subjid,
                                     suffix='T1w',
                                     extension='.nii.gz',
                                     scope=scopes[1])

        t1w_deriv_orig_space_files = [f for f in t1w_deriv_files if ('space-DWI' not in f.filename) and ('desc-brain' not in f.filename)]
        # print(f'	{t1w_deriv_orig_space_files}\n')
        t1w_deriv_dwi_space_files = layout.get(subject=subjid,
                                               suffix='T1w',
                                               space='DWI',
                                               extension='.nii.gz',
                                               scope=scopes[1])
        t1w_deriv_dwi_space_files = [f for f in t1w_deriv_dwi_space_files if 'desc-brain' not in f.filename]

        # print(f'    {t1w_deriv_dwi_space_files}\n')

        # t1w_raw_img = nibabel.load(t1w_raw_files[0].path)
        # voxel_size = list(t1w_raw_img.header.get_zooms())

        # mask_orig_img = nibabel.load(brain_masks_files[0].path)
        # mask_raw_img = nibabel.processing.resample_from_to(mask_orig_img, t1w_raw_img, order=0)

        # Generate output filename sub-20_desc-brain_mask.nii.gz
        subj, desc, tail = brain_masks_files[0].filename.split('_')
        mask_raw_path = os.path.join(brain_masks_files[0].dirname, subj + '_space-orig_' + desc + '_' + tail)
        print(f'    * Resample and save brain mask as:\n      {mask_raw_path}')
        # resample_mask(brain_masks_files[0].path, t1w_raw_files[0].path, mask_raw_path)
        resample_to_reference_using_ants(input_fixed=t1w_raw_files[0].path,
                                         input_moving=t1w_deriv_orig_space_files[0].path,
                                         input_mask=brain_masks_files[0].path,
                                         output_mask=mask_raw_path)

        #nibabel.save(mask_raw_img, mask_raw_path)
        mask_raw_file = bids.layout.models.BIDSFile(mask_raw_path)

        # Deface images
        if args.no_deface is not True:
            run_quickshear_deface_file_list(t1w_raw_files, mask_file=mask_raw_file)
            run_quickshear_deface_file_list(t1w_deriv_orig_space_files, mask_file=brain_masks_files[0])
            run_quickshear_deface_file_list(t1w_deriv_dwi_space_files, mask_file=brain_masks_files[1])

        # Combine the list of files and create a PDF report
        files += t1w_raw_files + t1w_deriv_dwi_space_files + t1w_deriv_orig_space_files

    if args.no_report is not True:
        print(files)
        create_report(
            bids_dir=args.bids_dir,
            output_dir=os.path.join(args.bids_dir, "code"),
            files=files,
            report_name='dataset_quickshear_defacing_report.pdf')


if __name__ == '__main__':
    parser = create_parser()
    args = parser.parse_args()
    print(args.bids_dir)
    main(args)
