From bf0a16813a3424cc4faba6831c8b6726042c9de0 Mon Sep 17 00:00:00 2001
From: <>
Date: Wed, 19 Jun 2024 22:50:31 +0200
Subject: [PATCH] Update documentation
---
.../Tutorial5_Reconstruction_teacher.ipynb | 1 +
.../Tutorial6_Registration_student.ipynb | 1 +
genindex.html | 3 +-
index.html | 3 +-
.../Tutorial1_NumpySimpleITK_student.html | 3 +-
.../Tutorial1_NumpySimpleITK_teacher.html | 3 +-
...2_ConvolutionalNeuralNetworks_student.html | 3 +-
...2_ConvolutionalNeuralNetworks_teacher.html | 3 +-
.../Tutorial3_Segmentation_student.html | 3 +-
.../Tutorial3_Segmentation_teacher.html | 3 +-
.../Tutorial4_Generative_Models_student.html | 3 +-
.../Tutorial4_Generative_Models_teacher.html | 7 +-
.../Tutorial5_Reconstruction_student.html | 15 +-
.../Tutorial5_Reconstruction_teacher.html | 981 ++++++++++++
.../Tutorial6_Registration_student.html | 1373 +++++++++++++++++
objects.inv | 6 +-
search.html | 3 +-
searchindex.js | 2 +-
18 files changed, 2386 insertions(+), 30 deletions(-)
create mode 100644 _sources/notebooks/Tutorial5_Reconstruction/Tutorial5_Reconstruction_teacher.ipynb
create mode 100644 _sources/notebooks/Tutorial6_Registration/Tutorial6_Registration_student.ipynb
create mode 100644 notebooks/Tutorial5_Reconstruction/Tutorial5_Reconstruction_teacher.html
create mode 100644 notebooks/Tutorial6_Registration/Tutorial6_Registration_student.html
diff --git a/_sources/notebooks/Tutorial5_Reconstruction/Tutorial5_Reconstruction_teacher.ipynb b/_sources/notebooks/Tutorial5_Reconstruction/Tutorial5_Reconstruction_teacher.ipynb
new file mode 100644
index 0000000..42dcd95
--- /dev/null
+++ b/_sources/notebooks/Tutorial5_Reconstruction/Tutorial5_Reconstruction_teacher.ipynb
@@ -0,0 +1 @@
+{"cells": [{"cell_type": "markdown", "id": "73f469b6", "metadata": {"user_expressions": []}, "source": ["# Tutorial 5\n", "## June 4, 2024\n", "In the previous tutorials, you have familiarized yourself with PyTorch, MONAI, and Weights & Biases. In last week's lecture, you have learned about registration. In this tutorial, you will develop, train, and evaluate a CNN for denoising of (synthetic) CT images. "]}, {"cell_type": "markdown", "id": "bf679d08", "metadata": {"user_expressions": []}, "source": ["First, let's take care of the necessities:\n", "- If you're using Google Colab, make sure to select a GPU Runtime.\n", "- Connect to Weights & Biases using the code below.\n", "- Install a few libraries that we will use in this tutorial."]}, {"cell_type": "code", "execution_count": null, "id": "3d04ca74", "metadata": {}, "outputs": [], "source": ["import os\n", "import wandb\n", "\n", "os.environ[\"KMP_DUPLICATE_LIB_OK\"]=\"TRUE\"\n", "wandb.login()"]}, {"cell_type": "code", "execution_count": null, "id": "e38ae6e8", "metadata": {}, "outputs": [], "source": ["!pip install dival\n", "!pip install kornia"]}, {"cell_type": "markdown", "id": "2170d4ce", "metadata": {"user_expressions": []}, "source": ["## Reconstruction\n", "In this tutorial, you will reconstruct CT images. To not use too much disk storage, we will synthetise images on the fly using the Deep Inversion Validation Library [(dival)](https://github.com/jleuschn/dival). These are 2D images with $128\\times 128$ pixels that contain a random number of ellipses with random sizes and random intensities. \n", "\n", "First, make a dataset of ellipses. This will make an object that we can call for images using a generator. Next, we take a look at what this dataset contains. We will use the `generator` to ask for a sample. Each sample contains a sinogram and a ground truth (original) synthetic image that we can visualize. You may recall from the lecture that the sinogram is made up of integrals along projections. The horizontal axis in the sinogram corresponds to the location $s$ along the detector, the vertical axis to the projection angle $\\theta$.\n", "\n", ""]}, {"cell_type": "code", "execution_count": null, "id": "ddf54472", "metadata": {}, "outputs": [], "source": ["import dival\n", "\n", "dataset = dival.get_standard_dataset('ellipses', impl='skimage')\n", "dat_gen = dataset.generator(part='train')"]}, {"cell_type": "markdown", "id": "c0bc0543", "metadata": {"user_expressions": []}, "source": ["Run the cell below to show a sinogram and image in the dataset."]}, {"cell_type": "code", "execution_count": null, "id": "27ee363d", "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "# Get a sample from the generator\n", "sinogram, ground_truth = next(dat_gen)\n", "fig, axs = plt.subplots(1, 2, figsize=(10, 5))\n", "\n", "# Show the sinogram\n", "axs[0].imshow(sinogram, cmap='gray', extent=[0, 183, -90, 90])\n", "axs[0].set_title('Sinogram')\n", "axs[0].set_xlabel('$s$')\n", "axs[0].set_ylabel('$\\Theta$')\n", "\n", "# Show the ground truth image\n", "axs[1].imshow(ground_truth, cmap='gray')\n", "axs[1].set_title('Ground truth')\n", "axs[1].set_xlabel('$x$')\n", "axs[1].set_ylabel('$y$')\n", "plt.show() "]}, {"cell_type": "markdown", "id": "553cc7b6", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "What kind of CT reconstruction problem is this? Limited-view or sparse-angle CT? 
Why?\n", ":::"]}, {"cell_type": "markdown", "id": "b560823e", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "This is a sparse-angle CT recontruction problem. The view spans 180 degrees, but the number of angles is low.\n", ":::"]}, {"cell_type": "markdown", "id": "cf6a41d6", "metadata": {"user_expressions": []}, "source": ["Not only does the sinogram contain few angles, it also contains added white noise. If we simply backproject the sinogram to the image domain we end up with a low-quality image. Let's give it a try using the standard [Filtered Backprojection](https://en.wikipedia.org/wiki/Radon_transform#Reconstruction_approaches) (FBP) algorithm for CT and its implementation in [scikit-image](https://scikit-image.org/)."]}, {"cell_type": "code", "execution_count": null, "id": "e85b33da", "metadata": {}, "outputs": [], "source": ["import skimage.transform as sktr\n", "\n", "# Get a sample from the generator\n", "sinogram, ground_truth = next(dat_gen)\n", "sinogram = np.asarray(sinogram).transpose()\n", "\n", "# This defines the projectiona angles\n", "theta = np.linspace(-90., 90., sinogram.shape[1], endpoint=True)\n", "\n", "# Perform FBP\n", "fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]\n", "fig, axs = plt.subplots(1, 3, figsize=(12, 4))\n", "axs[0].imshow(sinogram.transpose(), cmap='gray', extent=[0, 183, -90, 90])\n", "axs[0].set_title('Sinogram')\n", "axs[0].set_xlabel('$s$')\n", "axs[0].set_ylabel('$\\Theta$')\n", "axs[1].imshow(ground_truth, cmap='gray', clim=[0, 1])\n", "axs[1].set_title('Ground truth')\n", "axs[1].set_xlabel('$x$')\n", "axs[1].set_ylabel('$y$')\n", "axs[2].imshow(fbp_recon, cmap='gray', clim=[0, 1])\n", "axs[2].set_title('FBP')\n", "axs[2].set_xlabel('$x$')\n", "axs[2].set_ylabel('$y$')\n", "plt.show()"]}, {"cell_type": "markdown", "id": "3799bea2", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "What do you think of the quality of the reconstructed FBP algorithm? Use the cell below to quantify the similarity between the images using the structural similarity index (SSIM). Does this reflect your intuition? Also compute the PSNR using the [`peak_signal_noise_ratio`](https://scikit-image.org/docs/stable/api/skimage.metrics.html#skimage.metrics.peak_signal_noise_ratio) method in `scikit-image`."]}, {"cell_type": "markdown", "id": "155ad8a8", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "```python \n", "import skimage.metrics as skme\n", "\n", "print('SSIM = {:.2f}'.format(skme.structural_similarity(np.asarray(ground_truth), fbp_recon, data_range=np.max(ground_truth)-np.min(ground_truth))))\n", "print('PSNR = {:.2f}'.format(skme.peak_signal_noise_ratio(np.asarray(ground_truth), fbp_recon)))\n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "8e3c231b", "metadata": {"user_expressions": []}, "source": ["### Datasets and dataloaders\n", "\n", "Our (or your) goal now is to obtain high(er) quality reconstructed images based on the sinogram measurements. As you have seen in the lecture, this can be done in four ways:\n", "1. Train a reconstruction method that directly maps from the measurement (sinogram) domain to the image domain.\n", "2. **Preprocessing** Clean up the sinogram using a neural network, then backproject to the image domain.\n", "3. 
**Postprocessing** First backproject to the image domain, then improve the reconstruction using a neural network.\n", "4. Iterative methods that integrate data consistency.\n", "\n", "Here, we will follow the third approach, postprocessing. We create reconstructions from the generated sinograms using filtered backprojection and use a neural network to learn corrections on this FBP image and improve the reconstruction, as shown in the image below. The data that we need for training this network is the reconstructions from FBP, and the ground-truth reconstructions from the dival dataset. \n", "\n", "\n", "We will make a training dataset of 512 samples from the ellipses dival dataset that we store in a MONAI `DataSet`. The code below does this in four steps:\n", "1. Create a `dival` generator that creates sinograms and ground-truth reconstructions.\n", "2. Make a dictionary (like we did in the previous tutorial) that contains the ground-truth reconstructions and the reconstructions constructed by FBP as separate keys.\n", "3. Define the transforms for the data (also like the previous tutorial). In this case we require an additional 'channels' dimension, as that is what the neural network expects. We will not make use of extra data augmentation.\n", "4. Construct the dataset using the dictionary and the defined transform."]}, {"cell_type": "code", "execution_count": null, "id": "3642758c", "metadata": {}, "outputs": [], "source": ["import tqdm\n", "import monai\n", "\n", "theta = np.linspace(-90., 90., sinogram.shape[1], endpoint=True)\n", "\n", "# Make a generator for the training part of the dataset\n", "train_gen = dataset.generator(part='train')\n", "train_samples = []\n", "\n", "# Make a list of (in this case) 512 random training samples. We store the filtered backprojection (FBP) and ground truth image\n", "# in a dictionary for each sample, and add these to a list.\n", "for ns in tqdm.tqdm(range(512)):\n", " sinogram, ground_truth = next(train_gen)\n", " sinogram = np.asarray(sinogram).transpose()\n", " fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]\n", " train_samples.append({'fbp': fbp_recon, 'ground_truth': np.asarray(ground_truth)})\n", "\n", "# You can add or remove transforms here\n", "train_transform = monai.transforms.Compose([\n", " monai.transforms.AddChanneld(keys=['fbp', 'ground_truth'])\n", "]) \n", "\n", "# Use the list of dictionaries and the transform to initialize a MONAI CacheDataset\n", "train_dataset = monai.data.CacheDataset(train_samples, transform=train_transform) "]}, {"cell_type": "markdown", "id": "7f7e9131", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Also make a validation dataset and call it `val_dataset`. 
This dataset can be smaller, e.g., 64 or 128 samples.\n", ":::"]}, {"cell_type": "markdown", "id": "6743dc66", "metadata": {"tags": ["teacher"]}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "```python\n", "val_gen = dataset.generator(part='validation')\n", "val_samples = []\n", "val_transform = monai.transforms.Compose([\n", " monai.transforms.AddChanneld(keys=['fbp', 'ground_truth'])\n", "])\n", "for ns in tqdm.tqdm(range(128)):\n", " sinogram, ground_truth = next(val_gen)\n", " sinogram = np.asarray(sinogram).transpose()\n", " fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]\n", " val_samples.append({'fbp': fbp_recon, 'ground_truth': np.asarray(ground_truth)})\n", "val_dataset = monai.data.CacheDataset(val_samples, transform=val_transform) \n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "2a563840", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Now, make a dataloader for both the validation and training data, called `train_loader` and `validation_loader`, that we can use for sampling batches during training of the network. Give them a reasonable batch size, e.g., 16.\n", ":::"]}, {"cell_type": "markdown", "id": "085d8a37", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "\n", "```python\n", "train_loader = monai.data.DataLoader(train_dataset, batch_size=16)\n", "validation_loader = monai.data.DataLoader(val_dataset, batch_size=16)\n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "69da2563", "metadata": {"user_expressions": []}, "source": ["### Model\n", "Now that we have datasets and dataloaders, the next step is to define a model, optimizer and criterion. Because we want to improve the FBP-reconstructed image, we are dealing with an image-to-image task. A standard U-Net as implemented in MONAI is therefore a good starting point. First, make sure that you are using the GPU (CUDA), otherwise training will be extremely slow."]}, {"cell_type": "code", "execution_count": null, "id": "ec0b16bf", "metadata": {}, "outputs": [], "source": ["import torch\n", "\n", "if torch.cuda.is_available():\n", " device = torch.device(\"cuda\")\n", "elif torch.backends.mps.is_available():\n", " device = torch.device(\"mps\")\n", "else:\n", " device = \"cpu\"\n", "print(f'The used device is {device}')"]}, {"cell_type": "markdown", "id": "95a94400", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Initialize a U-Net with the correct settings, e.g. channels and dimensions, and call it `model`. Here, it's convenient to use the [`BasicUNet`](https://docs.monai.io/en/stable/networks.html#monai.networks.nets.BasicUNet) as implemented in MONAI.\n", ":::"]}, {"cell_type": "markdown", "id": "665c7ee9", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "```python\n", "model = monai.networks.nets.BasicUNet(\n", " spatial_dims=2,\n", " out_channels=1\n", ").to(device)\n", "\n", "# model = monai.networks.nets.SegResNet(\n", "# spatial_dims=2,\n", "# out_channels=1\n", "# ).to(device)\n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "08ca26ef", "metadata": {}, "source": ["### Loss function\n", "An important aspect is the loss function that you will use to optimize the model. 
The problem that we are trying to solve using a neural network is a *regression* problem, which differs from the *classification* approach we covered in the segmentation tutorial. Instead of classifying each pixel as a certain class, we alter pixel intensities to obtain a better overall reconstruction of the image. \n", "\n", "Because this task is substantially different, we need to change our loss function. In the previous tutorial we used the Dice loss, which measures the overlap for each of the classes to segment. In this case, an L2 (mean squared error) or L1 (mean absolute error) loss suits our objective. Alternatively, we can use a loss that aims to maximize the structural similarity (SSIM). For this, we use the [kornia](https://kornia.readthedocs.io/en/latest/) library."]}, {"cell_type": "code", "execution_count": null, "id": "8b8b749d", "metadata": {}, "outputs": [], "source": ["import kornia \n", "\n", "# Three loss functions, turn them on or off by commenting\n", "\n", "loss_function = torch.nn.MSELoss()\n", "# loss_function = torch.nn.L1Loss()\n", "# loss_function = kornia.losses.SSIMLoss(window_size=3)"]}, {"cell_type": "markdown", "id": "8d806126", "metadata": {}, "source": ["As in previous tutorials, we use an adaptive SGD (Adam) optimizer to train our network. In this tutorial, we add a [learning rate scheduler](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html). This scheduler lowers the learning rate every *step_size* steps, meaning that the optimizer will take smaller steps in the direction of the gradient after a set number of epochs. Therefore, the optimizer can potentially find a better local minimum for the weights of the neural network."]}, {"cell_type": "code", "execution_count": null, "id": "5fe55ae0", "metadata": {}, "outputs": [], "source": ["optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n", "scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)"]}, {"cell_type": "markdown", "id": "32e251c7", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Complete the code below and train the U-Net.\n", "\n", "What does the model learn? Look carefully at how we determine the output of the model. 
Can you describe what happens in the following line: `outputs = model(batch_data['fbp'].float().to(device)) + batch_data[\"fbp\"].float().to(device)`?\n", ":::"]}, {"cell_type": "markdown", "id": "114aceb6", "metadata": {"tags": ["teacher"]}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "```python\n", "from tqdm.notebook import tqdm\n", "import wandb\n", "from skimage.metrics import structural_similarity as ssim\n", "\n", "\n", "run = wandb.init(\n", " project='tutorial4_reconstruction',\n", " config={\n", " 'loss function': str(loss_function), \n", " 'lr': optimizer.param_groups[0][\"lr\"],\n", " 'batch_size': train_loader.batch_size,\n", " }\n", ")\n", "# Do not hesitate to enrich this list of settings to be able to correctly keep track of your experiments!\n", "# For example you should include information on your model architecture\n", "\n", "run_id = run.id # We remember here the run ID to be able to write the evaluation metrics\n", "\n", "def log_to_wandb(epoch, train_loss, val_loss, batch_data, outputs):\n", " \"\"\" Function that logs ongoing training variables to W&B \"\"\"\n", "\n", " # Create list of images that have segmentation masks for model output and ground truth\n", " # log_imgs = [wandb.Image(PIL.Image.fromarray(img.detach().cpu().numpy())) for img in outputs]\n", " val_ssim = []\n", " for im_id in range(batch_data['ground_truth'].shape[0]):\n", " val_ssim.append(ssim(batch_data['ground_truth'].detach().cpu().numpy()[im_id, 0, :, :].squeeze(), \n", " outputs.detach().cpu().numpy()[im_id, 0, :, :].squeeze() ))\n", " val_ssim = np.mean(np.asarray(val_ssim))\n", " # Send epoch, losses and images to W&B\n", " wandb.log({'epoch': epoch, 'train_loss': train_loss, 'val_loss': val_loss, 'val_ssim': val_ssim}) \n", " \n", "for epoch in tqdm(range(75)):\n", " model.train() \n", " epoch_loss = 0\n", " step = 0\n", " for batch_data in train_loader: \n", " step += 1\n", " optimizer.zero_grad()\n", " outputs = model(batch_data[\"fbp\"].float().to(device)) + batch_data[\"fbp\"].float().to(device)\n", " loss = loss_function(outputs, batch_data[\"ground_truth\"].to(device))\n", " loss.backward()\n", " optimizer.step()\n", " epoch_loss += loss.item()\n", " train_loss = epoch_loss/step\n", " # validation part\n", " step = 0\n", " val_loss = 0\n", " for batch_data in validation_loader:\n", " step += 1\n", " model.eval()\n", " outputs = model(batch_data['fbp'].float().to(device)) + batch_data[\"fbp\"].float().to(device)\n", " loss = loss_function(outputs, batch_data['ground_truth'].to(device)) \n", " val_loss+= loss.item()\n", " val_loss = val_loss / step\n", " log_to_wandb(epoch, train_loss, val_loss, batch_data, outputs)\n", " scheduler.step()\n", "\n", "# Store the network parameters \n", "torch.save(model.state_dict(), r'trainedUNet.pt')\n", "run.finish()\n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "78f44b00", "metadata": {}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Now make a `DataSet` and `DataLoader` for the test set. 
Just a handful of images should be enough.\n", ":::"]}, {"cell_type": "markdown", "id": "62426ac7", "metadata": {"tags": ["teacher"]}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "```python\n", "import tqdm\n", "\n", "test_gen = dataset.generator(part='test')\n", "test_samples = []\n", "test_transform = monai.transforms.Compose([\n", " monai.transforms.AddChanneld(keys=['fbp', 'ground_truth'])\n", "])\n", "for ns in tqdm.tqdm(range(4)):\n", " sinogram, ground_truth = next(test_gen)\n", " sinogram = np.asarray(sinogram).transpose()\n", " fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]\n", " test_samples.append({'sinogram': sinogram, 'fbp': fbp_recon, 'ground_truth': np.asarray(ground_truth)})\n", "test_dataset = monai.data.CacheDataset(test_samples, transform=val_transform)\n", "\n", "test_loader = monai.data.DataLoader(test_dataset, batch_size=1)\n", "```\n", ":::"]}, {"cell_type": "markdown", "id": "320f6993", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Visualize a number of reconstructions from the neural network and compare them to the fbp reconstructed images, using the code below. The performance of the network is evaluated using the structural similarity [function](https://scikit-image.org/docs/stable/api/skimage.metrics.html#skimage.metrics.structural_similarity) in scikit-image. Does the neural network improve this metric a lot compared to the filtered back projection?\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "edf5dbe0", "metadata": {}, "outputs": [], "source": ["model.eval()\n", "\n", "for test_sample in test_loader:\n", " output = model(test_sample['fbp'].to(device)) + test_sample['fbp'].to(device)\n", " output = output.detach().cpu().numpy()[0, 0, :, :].squeeze()\n", " ground_truth = test_sample['ground_truth'][0, 0, :, :].squeeze()\n", " fbp_recon = test_sample['fbp'][0, 0, :, :].squeeze()\n", " fig, axs = plt.subplots(1, 3, figsize=(12, 4))\n", " axs[0].imshow(fbp_recon, cmap='gray', clim=[0, 1])\n", " axs[0].set_title('FBP SSIM={:.2f}'.format(ssim(ground_truth.cpu().numpy(), fbp_recon.cpu().numpy())))\n", " axs[0].set_xlabel('$x$')\n", " axs[0].set_ylabel('$y$')\n", " axs[1].imshow(ground_truth, cmap='gray', clim=[0, 1])\n", " axs[1].set_title('Ground truth')\n", " axs[1].set_xlabel('$x$')\n", " axs[1].set_ylabel('$y$')\n", " axs[2].imshow(output, cmap='gray', clim=[0, 1])\n", " axs[2].set_title('CNN SSIM={:.2f}'.format(ssim(ground_truth.cpu().numpy(), output)))\n", " axs[2].set_xlabel('$x$')\n", " axs[2].set_ylabel('$y$')\n", " plt.show() "]}, {"cell_type": "markdown", "id": "b9496534", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: tip\n", "Some observations that you could make: \n", "- The SSIM is definitely improved compared to the standard filtered back projection (FBP). CNN results should be in the order of ~0.8 SSIM.\n", "- The output images of the CNN are less noisy than the FBP reconstructions. However, they're also a bit more blotchy/cartoonish if you use the CNN.\n", ":::"]}, {"cell_type": "markdown", "id": "7f30cbb0", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Instead of a U-Net, try a different model, e.g., a [SegResNet](https://docs.monai.io/en/stable/networks.html#segresnet) in MONAI.\n", "Evaluate how the different loss functions affect the performance of the network. 
Note that the SSIM on the validation set is also written to Weights & Biases during training. Which loss leads to the best SSIM scores? Which loss results in the worst SSIM scores?\n", ":::"]}, {"cell_type": "markdown", "id": "bcb156ab", "metadata": {"tags": ["teacher"], "user_expressions": []}, "source": [":::{admonition} Answer key\n", ":class: seealso\n", "In general, using an SSIM loss will lead to better SSIM scores. The L1 loss is also expected to lead to better results than the MSE loss, as it's less susceptible to outliers and will smooth the resulting images less.\n", ":::"]}, {"cell_type": "markdown", "id": "5ffb45f5", "metadata": {"lines_to_next_cell": 0, "user_expressions": []}, "source": ["## From post-processing to pre-processing\n", "So far, you have used a post-processing approach for reconstruction. In the lecture, we have discussed an alternative *pre-processing* approach, in which the sinogram image is improved before FBP. This additional exercise is **entirely optional**, but you could try to turn the current model into such a model, and see if the results that you get are better or worse than the results obtained so far. Good luck!"]}, {"cell_type": "code", "execution_count": null, "id": "377375b6", "metadata": {"lines_to_next_cell": 2}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 5}
\ No newline at end of file
diff --git a/_sources/notebooks/Tutorial6_Registration/Tutorial6_Registration_student.ipynb b/_sources/notebooks/Tutorial6_Registration/Tutorial6_Registration_student.ipynb
new file mode 100644
index 0000000..08c07a9
--- /dev/null
+++ b/_sources/notebooks/Tutorial6_Registration/Tutorial6_Registration_student.ipynb
@@ -0,0 +1 @@
+{"cells": [{"cell_type": "markdown", "id": "04f81b71", "metadata": {"user_expressions": []}, "source": ["# Tutorial 6\n", "## June 18, 2024\n", "In this tutorial you will develop, train, and evaluate a CNN that learns to perform deformable image registration in chest X-ray images. "]}, {"cell_type": "markdown", "id": "6f65ab1f", "metadata": {"user_expressions": []}, "source": ["First, let's take care of the necessities:\n", "- If you're using Google Colab, make sure to select a GPU Runtime.\n", "- Connect to Weights & Biases using the code below.\n", "- Install a few libraries that we will use in this tutorial."]}, {"cell_type": "code", "execution_count": null, "id": "51c2a8d4", "metadata": {}, "outputs": [], "source": ["import os\n", "import wandb\n", "\n", "os.environ[\"KMP_DUPLICATE_LIB_OK\"]=\"TRUE\"\n", "wandb.login()"]}, {"cell_type": "code", "execution_count": null, "id": "11f1afed", "metadata": {}, "outputs": [], "source": ["!pip install dival\n", "!pip install kornia\n", "!pip install monai"]}, {"cell_type": "markdown", "id": "6d2c3686", "metadata": {"user_expressions": []}, "source": ["## Part 1 - Registration"]}, {"cell_type": "code", "execution_count": null, "id": "47c4e491", "metadata": {}, "outputs": [], "source": ["import monai\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import torch\n", "import wandb"]}, {"cell_type": "markdown", "id": "cf58bbec", "metadata": {"user_expressions": []}, "source": ["We will register chest X-ray images. We will reuse the data of **Tutorial 3**. As always, we first set the paths. This should be the path ending in 'ribs'. If you don't have the data set anymore, you can download it using the lines below:"]}, {"cell_type": "code", "execution_count": null, "id": "9c1a55ec", "metadata": {}, "outputs": [], "source": ["!wget https://surfdrive.surf.nl/files/index.php/s/Y4psc2pQnfkJuoT/download -O Tutorial_3.zip\n", "!unzip -qo Tutorial_3.zip\n", "data_path = \"ribs\""]}, {"cell_type": "code", "execution_count": null, "id": "8d4ec39d", "metadata": {}, "outputs": [], "source": ["# ONLY IF YOU USE JUPYTER: ADD PATH \u2328\ufe0f\n", "data_path = r'/Users/jmwolterink/Downloads/ribs'# WHEREDIDYOUPUTTHEDATA?"]}, {"cell_type": "code", "execution_count": null, "id": "a0adcf56", "metadata": {}, "outputs": [], "source": ["# ONLY IF YOU USE COLAB: ADD PATH \u2328\ufe0f\n", "from google.colab import drive\n", "\n", "drive.mount('/content/drive')\n", "data_path = r'/content/drive/My Drive/Tutorial3'"]}, {"cell_type": "code", "execution_count": null, "id": "3f58a57c", "metadata": {}, "outputs": [], "source": ["# check if data_path exists:\n", "import os\n", "\n", "if not os.path.exists(data_path):\n", " print(\"Please update your data path to an existing folder.\")\n", "elif not set([\"train\", \"val\", \"test\"]).issubset(set(os.listdir(data_path))):\n", " print(\"Please update your data path to the correct folder (should contain train, val and test folders).\")\n", "else:\n", " print(\"Congrats! You selected the correct folder :)\")"]}, {"cell_type": "markdown", "id": "6b8112de", "metadata": {}, "source": ["### Data management\n", "\n", "In this part we prepare all the tools needed to load and visualize our samples. One thing we *could* do is perform **inter**-patient registration, i.e., register two chest X-ray images of different patients. However, this is a very challenging problem. Instead, to make our life a bit easier, we will perform **intra**-patient registration: register two images of the same patient. 
For each patient, we make a synthetic moving image by applying some random elastic deformations. To build this data set, we we used the [Rand2DElasticd](https://docs.monai.io/en/stable/transforms.html#rand2delastic) transform on both the image and the mask. We will use a neural network to learn the deformation field between the fixed image and the moving image.\n", ""]}, {"cell_type": "markdown", "id": "69822e22", "metadata": {"user_expressions": []}, "source": ["Similarly as in **Tutorial 3**, make a dictionary of the image file names."]}, {"cell_type": "code", "execution_count": null, "id": "2c0d9aff", "metadata": {}, "outputs": [], "source": ["import os\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import glob\n", "import monai\n", "from PIL import Image\n", "import torch\n", "\n", "def build_dict_ribs(data_path, mode='train'):\n", " \"\"\"\n", " This function returns a list of dictionaries, each dictionary containing the keys 'img' and 'mask' \n", " that returns the path to the corresponding image.\n", " \n", " Args:\n", " data_path (str): path to the root folder of the data set.\n", " mode (str): subset used. Must correspond to 'train', 'val' or 'test'.\n", " \n", " Returns:\n", " (List[Dict[str, str]]) list of the dictionnaries containing the paths of X-ray images and masks.\n", " \"\"\"\n", " # test if mode is correct\n", " if mode not in [\"train\", \"val\", \"test\"]:\n", " raise ValueError(f\"Please choose a mode in ['train', 'val', 'test']. Current mode is {mode}.\")\n", " \n", " # define empty dictionary\n", " dicts = []\n", " # list all .png files in directory, including the path\n", " paths_xray = glob.glob(os.path.join(data_path, mode, 'img', '*.png'))\n", " # make a corresponding list for all the mask files\n", " for xray_path in paths_xray:\n", " if mode == 'test':\n", " suffix = 'val'\n", " else:\n", " suffix = mode\n", " # find the binary mask that belongs to the original image, based on indexing in the filename\n", " image_index = os.path.split(xray_path)[1].split('_')[-1].split('.')[0]\n", " # define path to mask file based on this index and add to list of mask paths\n", " mask_path = os.path.join(data_path, mode, 'mask', f'VinDr_RibCXR_{suffix}_{image_index}.png')\n", " if os.path.exists(mask_path):\n", " dicts.append({'fixed': xray_path, 'moving': xray_path, 'fixed_mask': mask_path, 'moving_mask': mask_path})\n", " return dicts\n", "\n", "class LoadRibData(monai.transforms.Transform):\n", " \"\"\"\n", " This custom Monai transform loads the data from the rib segmentation dataset.\n", " Defining a custom transform is simple; just overwrite the __init__ function and __call__ function.\n", " \"\"\"\n", " def __init__(self, keys=None):\n", " pass\n", "\n", " def __call__(self, sample):\n", " fixed = Image.open(sample['fixed']).convert('L') # import as grayscale image\n", " fixed = np.array(fixed, dtype=np.uint8)\n", " moving = Image.open(sample['moving']).convert('L') # import as grayscale image\n", " moving = np.array(moving, dtype=np.uint8) \n", " fixed_mask = Image.open(sample['fixed_mask']).convert('L') # import as grayscale image\n", " fixed_mask = np.array(fixed_mask, dtype=np.uint8)\n", " moving_mask = Image.open(sample['moving_mask']).convert('L') # import as grayscale image\n", " moving_mask = np.array(moving_mask, dtype=np.uint8) \n", " # mask has value 255 on rib pixels. 
Convert to binary array\n", " fixed_mask[np.where(fixed_mask==255)] = 1\n", " moving_mask[np.where(moving_mask==255)] = 1 \n", " return {'fixed': fixed, 'moving': moving, 'fixed_mask': fixed_mask, 'moving_mask': moving_mask, 'img_meta_dict': {'affine': np.eye(2)}, \n", " 'mask_meta_dict': {'affine': np.eye(2)}}"]}, {"cell_type": "markdown", "id": "71f52143", "metadata": {"user_expressions": []}, "source": ["Then we make a training dataset like before. The `Rand2DElasticd` transform here determines how much deformation is in the 'moving' image. "]}, {"cell_type": "code", "execution_count": null, "id": "709daf37", "metadata": {}, "outputs": [], "source": ["train_dict_list = build_dict_ribs(data_path, mode='train')\n", "\n", "# constructDataset from list of paths + transform\n", "transform = monai.transforms.Compose(\n", "[\n", " LoadRibData(),\n", " monai.transforms.AddChanneld(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask']),\n", " monai.transforms.Resized(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask'], spatial_size=(256, 256), mode=['bilinear', 'bilinear', 'nearest', 'nearest']),\n", " monai.transforms.HistogramNormalized(keys=['fixed', 'moving']),\n", " monai.transforms.ScaleIntensityd(keys=['fixed', 'moving'], minv=0.0, maxv=1.0),\n", " monai.transforms.Rand2DElasticd(keys=['moving', 'moving_mask'], spacing=(64, 64), \n", " magnitude_range=(-8, 8), prob=1, mode=['bilinear', 'nearest']), \n", "])\n", "train_dataset = monai.data.Dataset(train_dict_list, transform=transform)"]}, {"cell_type": "markdown", "id": "a6937bc7", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Visualize fixed and moving training images associated to their comparison image with the `visualize_fmc_sample` function below.\n", "\n", "Try different methods to create the comparison image. 
How well do these different methods allow you to qualitatively assess the quality of the registration?\n", "\n", "More information on this method is available in [the scikit-image documentation](https://scikit-image.org/docs/stable/api/skimage.util.html#skimage.util.compare_images).\n", "\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "5ad8dd22", "metadata": {"lines_to_next_cell": 2}, "outputs": [], "source": ["def visualize_fmc_sample(sample, method=\"checkerboard\"):\n", " \"\"\"\n", " Plot three images: fixed, moving and comparison.\n", " \n", " Args:\n", " sample (dict): sample of dataset created with `build_dataset`.\n", " method (str): method used by `skimage.util.compare_image`.\n", " \"\"\"\n", " import skimage.util as skut \n", " \n", " skut_methods = [\"diff\", \"blend\", \"checkerboard\"]\n", " if method not in skut_methods:\n", " raise ValueError(f\"Method must be chosen in {skut_methods}.\\n\"\n", " f\"Current value is {method}.\")\n", " \n", " \n", " fixed = np.squeeze(sample['fixed'])\n", " moving = np.squeeze(sample['moving'])\n", " comp_checker = skut.compare_images(fixed, moving, method=method)\n", " axs = plt.figure(constrained_layout=True, figsize=(15, 5)).subplot_mosaic(\"FMC\")\n", " axs['F'].imshow(fixed, cmap='gray')\n", " axs['F'].set_title('Fixed')\n", " axs['M'].imshow(moving, cmap='gray')\n", " axs['M'].set_title('Moving')\n", " axs['C'].imshow(comp_checker, cmap='gray')\n", " axs['C'].set_title('Comparison')\n", " plt.show()"]}, {"cell_type": "code", "execution_count": null, "id": "a9a6d507", "metadata": {}, "outputs": [], "source": ["sample = train_dataset[0]\n", "for method in [\"diff\", \"blend\", \"checkerboard\"]:\n", " print(f\"Method {method}\")\n", " visualize_fmc_sample(sample, method=method)"]}, {"cell_type": "markdown", "id": "424909c6", "metadata": {"user_expressions": []}, "source": ["Now we apply a little trick. Because applying the random deformation in each training iteration will be very costly, we only apply the deformation once and we make a new dataset based on the deformed images. Running the cell below may take a few minutes."]}, {"cell_type": "code", "execution_count": null, "id": "b979401a", "metadata": {}, "outputs": [], "source": ["import tqdm\n", "\n", "train_loader = monai.data.DataLoader(train_dataset, batch_size=1, shuffle=False)\n", "\n", "samples = []\n", "for train_batch in tqdm.tqdm(train_loader):\n", " samples.append(train_batch)\n", "\n", "# Make a new dataset and dataloader using the transformed images\n", "train_dataset = monai.data.Dataset(samples, transform=monai.transforms.SqueezeDimd(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask']))\n", "train_loader = monai.data.DataLoader(train_dataset, batch_size=16, shuffle=False)"]}, {"cell_type": "markdown", "id": "dbc1c8d4", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Create `val_dataset` and `val_loader`, corresponding to the `DataSet` and `DataLoader` for your validation set. The transforms can be the same as in the training set.\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "b629d6ce", "metadata": {"tags": ["student"]}, "outputs": [], "source": ["# Your code goes here"]}, {"cell_type": "markdown", "id": "605fac19", "metadata": {"user_expressions": []}, "source": ["### Model\n", "\n", "As model, we'll use a U-Net. 
The input/output structure is quite different from what we've seen before:\n", "- the network takes as input two images: the *moving* and *fixed* images.\n", "- it outputs one tensor representing the *deformation field*.\n", "\n", "\n", "\n", "\n", "This *deformation field* can be applied to the *moving* image with the `monai.networks.blocks.Warp` block of Monai.\n", "\n", "\n", "\n", "\n", "This deformed moving image is then compared to the *fixed* image: if they are similar, the deformation field is correctly registering the moving image on the fixed image. Keep in mind that this is done on **training** data, and we want the U-Net to learn to predict a proper deformation field given two new and unseen images. So we're not optimizing for a pair of images as would be done in conventional iterative registration, but training a model that can generalize.\n", "\n", "\n"]}, {"cell_type": "markdown", "id": "d995b700", "metadata": {"user_expressions": []}, "source": ["Before starting, let's check that you can work on a GPU by runnning the following cell:\n", "- if the device is \"cuda\" you are working on a GPU,\n", "- if the device is \"cpu\" call a teacher."]}, {"cell_type": "code", "execution_count": null, "id": "8188ecf4", "metadata": {"lines_to_next_cell": 2}, "outputs": [], "source": ["if torch.cuda.is_available():\n", " device = torch.device(\"cuda\")\n", "elif torch.backends.mps.is_available():\n", " device = torch.device(\"mps\")\n", " os.environ[\"PYTORCH_ENABLE_MPS_FALLBACK\"]=\"1\"\n", "else:\n", " device = \"cpu\"\n", "print(f'The used device is {device}')"]}, {"cell_type": "markdown", "id": "1dc48933", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Construct a U-Net with suitable settings and name it `model`. Check that you can correctly apply its output to the input moving image with the `warp_layer`!\n", ""]}, {"cell_type": "code", "execution_count": null, "id": "c57f42b0", "metadata": {"tags": ["student"]}, "outputs": [], "source": ["model = # FILL IN\n", "\n", "warp_layer = monai.networks.blocks.Warp().to(device)"]}, {"cell_type": "markdown", "id": "42134eab", "metadata": {}, "source": ["### Objective function\n", "\n", "We evaluate the similarity between the fixed image and the deformed moving image with the `MSELoss()`. The L1 or SSIM losses seen in the previous section could also be used. Furthermore, the deformation field is regularized with `BendingEnergyLoss`. This is a penalty that takes the smoothness of the deformation field into account: if it's not smooth enough, the bending energy is high. 
Thus, our model will favor smooth deformation fields.\n", "\n", "Finally, we pick an optimizer, in this case again an Adam optimizer."]}, {"cell_type": "code", "execution_count": null, "id": "08851796", "metadata": {}, "outputs": [], "source": ["image_loss = torch.nn.MSELoss()\n", "regularization = monai.losses.BendingEnergyLoss()\n", "optimizer = torch.optim.Adam(model.parameters(), 1e-3)"]}, {"cell_type": "markdown", "id": "79d28fa7", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Add a learning rate scheduler that lowers the learning rate by a factor ten every 100 epochs.\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "7edc0666", "metadata": {"tags": ["student"]}, "outputs": [], "source": ["# Your code goes here"]}, {"cell_type": "markdown", "id": "fdcaca69", "metadata": {"user_expressions": []}, "source": ["To warp the moving image using the predicted deformation field and *then* compute the loss between the deformed image and the fixed image, we define a forward function which does all this. The output of this function is `pred_image`. "]}, {"cell_type": "code", "execution_count": null, "id": "41b4d27a", "metadata": {}, "outputs": [], "source": ["def forward(batch_data, model):\n", " \"\"\"\n", " Applies the model to a batch of data.\n", " \n", " Args:\n", " batch_data (dict): a batch of samples computed by a DataLoader.\n", " model (Module): a model computing the deformation field.\n", " \n", " Returns:\n", " ddf (Tensor): batch of deformation fields.\n", " pred_image (Tensor): batch of deformed moving images.\n", " \n", " \"\"\"\n", " fixed_image = batch_data[\"fixed\"].to(device).float()\n", " moving_image = batch_data[\"moving\"].to(device).float()\n", " \n", " # predict DDF\n", " ddf = model(torch.cat((moving_image, fixed_image), dim=1))\n", "\n", " # warp moving image and label with the predicted ddf\n", " pred_image = warp_layer(moving_image, ddf)\n", "\n", " return ddf, pred_image"]}, {"cell_type": "markdown", "id": "c16dc690", "metadata": {"user_expressions": []}, "source": ["You can supervise the training process in W&B, in which at each epoch a batch of validation images are used to compute the comparison images of your choice, based on the parameter `method`."]}, {"cell_type": "code", "execution_count": null, "id": "b0488d2a", "metadata": {}, "outputs": [], "source": ["def log_to_wandb(epoch, train_loss, val_loss, pred_batch, fixed_batch, method=\"checkerboard\"):\n", " \"\"\" Function that logs ongoing training variables to W&B \"\"\"\n", " import skimage.util as skut\n", " \n", " log_imgs = []\n", " for fixed_pt, pred_pt in zip(pred_batch, fixed_batch):\n", " fixed_np = np.squeeze(fixed_pt.cpu().detach())\n", " pred_np = np.squeeze(pred_pt.cpu().detach())\n", " comp_checker = skut.compare_images(fixed_np, pred_np, method=method)\n", " log_imgs.append(wandb.Image(comp_checker))\n", "\n", " # Send epoch, losses and images to W&B\n", " wandb.log({'epoch': epoch, 'train_loss': train_loss, 'val_loss': val_loss, 'results': log_imgs})"]}, {"cell_type": "markdown", "id": "b220c79c", "metadata": {"user_expressions": []}, "source": ["### Training time\n", "\n", "Use the following cells to train your network. 
You may choose different parameters to improve the performance!"]}, {"cell_type": "code", "execution_count": null, "id": "cc1f2412", "metadata": {}, "outputs": [], "source": ["# Choose your parameters\n", "\n", "max_epochs = 200\n", "reg_weight = 0 # By default 0, but you can investigate what it does"]}, {"cell_type": "code", "execution_count": null, "id": "e14cc70c", "metadata": {}, "outputs": [], "source": ["from tqdm import tqdm\n", "\n", "run = wandb.init(\n", " project='tutorial4_registration',\n", " config={\n", " 'lr': optimizer.param_groups[0][\"lr\"],\n", " 'batch_size': train_loader.batch_size,\n", " 'regularization': reg_weight,\n", " 'loss_function': str(image_loss)\n", " }\n", ")\n", "# Do not hesitate to enrich this list of settings to be able to correctly keep track of your experiments!\n", "# For example you should add information on your model...\n", "\n", "run_id = run.id # We remember here the run ID to be able to write the evaluation metrics\n", "\n", "for epoch in tqdm(range(max_epochs)): \n", " model.train()\n", " epoch_loss = 0\n", " for batch_data in train_loader:\n", " optimizer.zero_grad()\n", "\n", " ddf, pred_image = forward(batch_data, model)\n", "\n", " fixed_image = batch_data[\"fixed\"].to(device).float()\n", " reg = regularization(ddf)\n", " loss = image_loss(pred_image, fixed_image) + reg_weight * reg\n", " loss.backward()\n", " optimizer.step()\n", " epoch_loss += loss.item()\n", "\n", " epoch_loss /= len(train_loader)\n", "\n", " model.eval()\n", " val_epoch_loss = 0\n", " for batch_data in val_loader:\n", " ddf, pred_image = forward(batch_data, model)\n", " fixed_image = batch_data[\"fixed\"].to(device).float()\n", " reg = regularization(ddf)\n", " loss = image_loss(pred_image, fixed_image) + reg_weight * reg\n", " val_epoch_loss += loss.item()\n", " val_epoch_loss /= len(val_loader)\n", "\n", " log_to_wandb(epoch, epoch_loss, val_epoch_loss, pred_image, fixed_image)\n", " \n", "run.finish() "]}, {"cell_type": "markdown", "id": "7162bc04", "metadata": {"user_expressions": []}, "source": ["### Evaluation of the trained model\n", "\n", "Now that the model has been trained, it's time to evaluate its performance. Use the code below to visualize samples and deformation fields. \n", "\n", ":::{admonition} Exercise\n", ":class: tip\n", "Are you satisfied with these registration results? Do they seem anatomically plausible? 
Try out different regularization factors (`reg_weight`) and see what they do to the registration.\n", ":::"]}, {"cell_type": "markdown", "id": "b7978f4f", "metadata": {"tags": ["student"]}, "source": ["Answer: "]}, {"cell_type": "code", "execution_count": null, "id": "255be026", "metadata": {}, "outputs": [], "source": ["def visualize_prediction(sample, model, method=\"checkerboard\"):\n", " \"\"\"\n", " Plot three images: fixed, moving and comparison.\n", " \n", " Args:\n", " sample (dict): sample of dataset created with `build_dataset`.\n", " model (Module): a model computing the deformation field.\n", " method (str): method used by `skimage.util.compare_image`.\n", " \"\"\"\n", " import skimage.util as skut \n", " \n", " skut_methods = [\"diff\", \"blend\", \"checkerboard\"]\n", " if method not in skut_methods:\n", " raise ValueError(f\"Method must be chosen in {skut_methods}.\\n\"\n", " f\"Current value is {method}.\")\n", " \n", " model.eval()\n", " \n", " # Compute deformation field + deformed image\n", " batch_data = {\n", " \"fixed\": sample[\"fixed\"].unsqueeze(0),\n", " \"moving\": sample[\"moving\"].unsqueeze(0),\n", " }\n", " ddf, pred_image = forward(batch_data, model)\n", " ddf = ddf.detach().cpu().numpy().squeeze()\n", " ddf = np.linalg.norm(ddf, axis=0).squeeze()\n", " \n", " # Squeeze images\n", " fixed = np.squeeze(sample[\"fixed\"])\n", " moving = np.squeeze(sample[\"moving\"]) \n", " deformed = np.squeeze(pred_image.detach().cpu())\n", " \n", " # Generate comparison image\n", " comp_checker = skut.compare_images(fixed, deformed, method=method, n_tiles=(4, 4))\n", " \n", " # Plot everything\n", " fig, axs = plt.subplots(1, 5, figsize=(18, 5)) \n", " axs[0].imshow(fixed, cmap='gray')\n", " axs[0].set_title('Fixed')\n", " axs[1].imshow(moving, cmap='gray')\n", " axs[1].set_title('Moving')\n", " axs[2].imshow(deformed, cmap='gray')\n", " axs[2].set_title('Deformed')\n", " axs[3].imshow(comp_checker, cmap='gray')\n", " axs[3].set_title('Comparison') \n", " dpl = axs[4].imshow(ddf, clim=(0, 10))\n", " fig.colorbar(dpl, ax=axs[4])\n", " plt.show() \n", " plt.show()\n", "for sample in val_dataset:\n", " visualize_prediction(sample, model)"]}, {"cell_type": "markdown", "id": "73d539e3", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "Compute the Jacobian determinant at each image voxel. How many of these are negative? Can you improve upon this?\n", ":::"]}, {"cell_type": "markdown", "id": "4910a844", "metadata": {"user_expressions": []}, "source": ["## Part 2 - Equivariance\n", "In this part, we are going to use some concepts that you've learned in the lecture on geometric deep learning. We are going to look at the equivariance properties of a neural network architecture that you should by now be very familiar with: the U-Net. We will again use the chest X-ray segmentation problem. 
Because training a network is not the focus here, we have pretrained a network that you can use for these experiments."]}, {"cell_type": "markdown", "id": "ad1fc1fe", "metadata": {"user_expressions": []}, "source": ["### Data loading\n", "We will again use the same utility functions as in Tutorial 3 to build a dictionary of files and load rib data."]}, {"cell_type": "code", "execution_count": null, "id": "e7084289", "metadata": {}, "outputs": [], "source": ["import os\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import glob\n", "import monai\n", "from PIL import Image\n", "import torch\n", "\n", "def build_dict_ribs(data_path, mode='train'):\n", " \"\"\"\n", " This function returns a list of dictionaries, each dictionary containing the keys 'img' and 'mask' \n", " that returns the path to the corresponding image.\n", " \n", " Args:\n", " data_path (str): path to the root folder of the data set.\n", " mode (str): subset used. Must correspond to 'train', 'val' or 'test'.\n", " \n", " Returns:\n", " (List[Dict[str, str]]) list of the dictionaries containing the paths of X-ray images and masks.\n", " \"\"\"\n", " # test if mode is correct\n", " if mode not in [\"train\", \"val\", \"test\"]:\n", " raise ValueError(f\"Please choose a mode in ['train', 'val', 'test']. Current mode is {mode}.\")\n", " \n", " # define empty dictionary\n", " dicts = []\n", " # list all .png files in directory, including the path\n", " paths_xray = glob.glob(os.path.join(data_path, mode, 'img', '*.png'))\n", " # make a corresponding list for all the mask files\n", " for xray_path in paths_xray:\n", " if mode == 'test':\n", " suffix = 'val'\n", " else:\n", " suffix = mode\n", " # find the binary mask that belongs to the original image, based on indexing in the filename\n", " image_index = os.path.split(xray_path)[1].split('_')[-1].split('.')[0]\n", " # define path to mask file based on this index and add to list of mask paths\n", " mask_path = os.path.join(data_path, mode, 'mask', f'VinDr_RibCXR_{suffix}_{image_index}.png')\n", " if os.path.exists(mask_path):\n", " dicts.append({'img': xray_path, 'mask': mask_path})\n", " return dicts\n", "\n", "class LoadRibData(monai.transforms.Transform):\n", " \"\"\"\n", " This custom Monai transform loads the data from the rib segmentation dataset.\n", " Defining a custom transform is simple; just overwrite the __init__ function and __call__ function.\n", " \"\"\"\n", " def __init__(self, keys=None):\n", " pass\n", "\n", " def __call__(self, sample):\n", " image = Image.open(sample['img']).convert('L') # import as grayscale image\n", " image = np.array(image, dtype=np.uint8)\n", " mask = Image.open(sample['mask']).convert('L') # import as grayscale image\n", " mask = np.array(mask, dtype=np.uint8)\n", " # mask has value 255 on rib pixels. Convert to binary array\n", " mask[np.where(mask==255)] = 1\n", " return {'img': image, 'mask': mask, 'img_meta_dict': {'affine': np.eye(2)}, \n", " 'mask_meta_dict': {'affine': np.eye(2)}}"]}, {"cell_type": "markdown", "id": "5a4461dc", "metadata": {"user_expressions": []}, "source": ["Use the cell below to make a validation loader with a single image. 
This is sufficient for the small experiment that you will perform."]}, {"cell_type": "code", "execution_count": null, "id": "bc8678fc", "metadata": {}, "outputs": [], "source": ["validation_dict_list = build_dict_ribs(data_path, mode='val')\n", "validation_transform = monai.transforms.Compose(\n", " [\n", " LoadRibData(),\n", " monai.transforms.AddChanneld(keys=['img', 'mask']),\n", " monai.transforms.HistogramNormalized(keys=['img']), \n", " monai.transforms.ScaleIntensityd(keys=['img'], minv=0, maxv=1),\n", " monai.transforms.Zoomd(keys=['img', 'mask'], zoom=0.25, mode=['bilinear', 'nearest'], keep_size=False),\n", " # monai.transforms.RandSpatialCropd(keys=['img', 'mask'], roi_size=[384, 384], random_size=False)\n", " monai.transforms.SpatialCropd(keys=['img', 'mask'], roi_center=[300, 300], roi_size=[384 + 64, 384]) \n", " ]\n", ")\n", "validation_data = monai.data.CacheDataset([validation_dict_list[3]], transform=validation_transform)\n", "validation_loader = monai.data.DataLoader(validation_data, batch_size=1, shuffle=False)"]}, {"cell_type": "markdown", "id": "73a6cc23", "metadata": {"user_expressions": []}, "source": ["### Loading a pretrained model\n", "We have already trained a model for you, the parameters of which were shared in JupyterLab as well.\n", "**Note**: if you downloaded the data set yourself, the model should be in the same folder as the images.\n", "If you already downloaded the data set but not the model, the model file is available [here](https://surfdrive.surf.nl/files/index.php/s/613zrvr0RDYZDqp)."]}, {"cell_type": "code", "execution_count": null, "id": "c896c819", "metadata": {}, "outputs": [], "source": ["pretrained_file = os.path.join(data_path, \"trainedUNet.pt\")"]}, {"cell_type": "markdown", "id": "62c5c2e5", "metadata": {"user_expressions": []}, "source": ["Next, we initialize a standard U-Net architecture and load the parameters of the pretrained network using the `load_state_dict` function."]}, {"cell_type": "code", "execution_count": null, "id": "4ee0c6c0", "metadata": {}, "outputs": [], "source": ["import torch\n", "import monai\n", "import random\n", "\n", "# Check whether we're using a GPU\n", "if torch.cuda.is_available():\n", " n_gpus = torch.cuda.device_count() # Total number of GPUs\n", " gpu_idx = random.randint(0, n_gpus - 1) # Random GPU index\n", " device = torch.device(f'cuda:{gpu_idx}')\n", " print('Using GPU: {}'.format(device))\n", "else:\n", " device = torch.device('cpu')\n", " print('GPU not found. Using CPU.')\n", "\n", "model = monai.networks.nets.UNet(\n", " spatial_dims=2,\n", " in_channels=1,\n", " out_channels=1,\n", " channels = (8, 16, 32, 64, 128),\n", " strides=(2, 2, 2, 2),\n", " num_res_units=2,\n", " dropout=0.5\n", ").to(device)\n", "\n", "model.load_state_dict(torch.load(pretrained_file))\n", "model.eval()"]}, {"cell_type": "markdown", "id": "2177e35b", "metadata": {}, "source": ["Let's use the pretrained network to segment (part of) our image. 
Run the cell below."]}, {"cell_type": "code", "execution_count": null, "id": "8bb097f1", "metadata": {}, "outputs": [], "source": ["for sample in validation_loader:\n", "\n", " img = sample['img'][:, :, :384, :384] \n", " mask = sample['mask'][:, :, :384, :384]\n", " output_noshift = torch.sigmoid(model(img.to(device))).detach().cpu().numpy().squeeze() \n", " \n", " fig, ax = plt.subplots(1,2, figsize = [12, 10]) \n", " # Plot X-ray image\n", " ax[0].imshow(img.squeeze(), 'gray')\n", " # Plot ground truth\n", " mask = np.squeeze(mask)\n", " overlay_mask = np.ma.masked_where(mask == 0, mask == 1)\n", " ax[0].imshow(overlay_mask, 'Greens', alpha = 0.7, clim=[0,1], interpolation='nearest')\n", " ax[0].set_title('Ground truth')\n", " # Plot output\n", " overlay_output = np.ma.masked_where(output_noshift < 0.1, output_noshift > 0.99)\n", " ax[1].imshow(img.squeeze(), 'gray')\n", " ax[1].imshow(overlay_output.squeeze(), 'Reds', alpha = 0.7, clim=[0,1])\n", " ax[1].set_title('Prediction')\n", " plt.show() "]}, {"cell_type": "markdown", "id": "0d09c619", "metadata": {}, "source": ["As you can see, segmentation isn't perfect, but that's also not the goal of this exercise. What we are going to look into is the translation equivariance (**Lecture 8**) of the U-Net. That is: if you translate the image by $d$ pixels, does the output also simply change by $d$ pixels. Note that this is a nice feature to have for a segmentation network: in principle we'd want our network to give us the same label for a pixel regardless of where the image was cut. The image below visualizes this principle. For segmentation of the pixels in the orange square, it shouldn't matter if we provide the red square or the green square as input to the U-Net.\n", "\n", ""]}, {"cell_type": "markdown", "id": "d20369c1", "metadata": {"user_expressions": []}, "source": [":::{admonition} Exercise\n", ":class: tip\n", "What do you think will happen to the U-Net's prediction if we give it a slightly shifted version of the image as input?\n", ":::"]}, {"cell_type": "markdown", "id": "80db0448", "metadata": {"user_expressions": []}, "source": ["Now we make a small script that performs the above experiment. First, we obtain the segmentation in the red box and we call this `output_noshift`. Then we shift the green box by an offset and each time obtain a segmentation in this box using the same model. We start small with a shift/offset of just a **single pixel**.\n", "\n", ":::{admonition} Exercise\n", ":class: tip\n", "Run the cell below and observe the outputs. 
Can you spot differences between the two segmentation masks?\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "198e78d2", "metadata": {}, "outputs": [], "source": ["offset = 1\n", "\n", "for sample in validation_loader:\n", "\n", " # Original image\n", " img = sample['img'][:, :, :384, :384] \n", " mask = sample['mask'][:, :, :384, :384]\n", " output_noshift = torch.sigmoid(model(img.to(device))).detach().cpu().numpy().squeeze() \n", "\n", " # Plot X-ray image\n", " fig, ax = plt.subplots(1,2, figsize = [12, 10]) \n", " ax[0].imshow(img.squeeze(), 'gray')\n", " # Plot ground truth\n", " mask = np.squeeze(mask)\n", " overlay_mask = np.ma.masked_where(mask == 0, mask == 1)\n", " ax[0].imshow(overlay_mask, 'Greens', alpha = 0.7, clim=[0,1], interpolation='nearest')\n", " ax[0].set_title('Ground truth')\n", " # Plot output\n", " overlay_output = np.ma.masked_where(output_noshift < 0.1, output_noshift >0.99)\n", " ax[1].imshow(img.squeeze(), 'gray')\n", " ax[1].imshow(overlay_output.squeeze(), 'Reds', alpha = 0.7, clim=[0,1])\n", " ax[1].set_title('Prediction')\n", " plt.show()\n", " \n", " # Shifted image\n", " img = sample['img'][:, :, offset:offset+384, :384]\n", " mask = sample['mask'][:, :, offset:offset+384, :384]\n", " output = torch.sigmoid(model(img.to(device))).detach().cpu().numpy().squeeze()\n", "\n", " # Plot X-ray image\n", " fig, ax = plt.subplots(1,2, figsize = [12, 10])\n", " ax[0].imshow(img.squeeze(), 'gray')\n", " # Plot ground truth\n", " mask = np.squeeze(mask)\n", " overlay_mask = np.ma.masked_where(mask == 0, mask == 1)\n", " ax[0].imshow(overlay_mask, 'Greens', alpha = 0.7, clim=[0,1], interpolation='nearest')\n", " ax[0].set_title('Ground truth shifted')\n", " # Plot output\n", " overlay_output = np.ma.masked_where(output < 0.1, output >0.99)\n", " ax[1].imshow(img.squeeze(), 'gray')\n", " ax[1].imshow(overlay_output.squeeze(), 'Reds', alpha = 0.7, clim=[0,1])\n", " ax[1].set_title('Prediction shifted')\n", " plt.show()"]}, {"cell_type": "markdown", "id": "4091bf35", "metadata": {"user_expressions": []}, "source": ["To highlight the differences between both segmentation masks a bit more, we make a difference image. We correct for the shift applied so that we're not comparing apples and oranges. The next cell shows the difference image between the original image and what we get when we process an image that is shifted by one pixel.\n", "\n", ":::{admonition} Exercise\n", ":class: tip\n", "Given these results, is a U-Net translation equivariant, invariant, or neither?\n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "d66af728", "metadata": {}, "outputs": [], "source": ["plt.figure(figsize=(6, 6))\n", "diffout = output_noshift[offset:, :384] - output[:-offset, :384]\n", "plt.imshow(diffout, cmap='seismic', clim=[-1, 1])\n", "plt.title('Offset {}'.format(offset))\n", "plt.colorbar()\n", "plt.show()"]}, {"cell_type": "markdown", "id": "e5980242", "metadata": {"user_expressions": []}, "source": ["We can repeat this for larger offsets. Let's take offsets up to 64 pixels, and each time compute the difference between the original and shifted image, in a subimage that should be unaffected by the shift. We store the L1 norm of the difference image in an array `norms` and plot these as a function of offset.\n", "\n", ":::{admonition} Exercise\n", ":class: tip\n", "The resulting plot shows that the U-Net is equivariant for none of the translations. This is due to a combination of border effects and downsampling layers. 
However, the plot also shows a particular pattern, in which the norm *dips* every 16 pixels of offset. Can you explain this based on the U-Net architecture? \n", ":::"]}, {"cell_type": "code", "execution_count": null, "id": "645df5e4", "metadata": {}, "outputs": [], "source": ["norms = []\n", "offsets = []\n", "plot_differences = False # Set to True to plot difference images for every offset\n", "\n", "img = sample['img'][:, :, :384, :384] \n", "mask = sample['mask'][:, :, :384, :384]\n", "output_noshift = torch.sigmoid(model(img.to(device))).detach().cpu().numpy().squeeze() \n", "\n", "for offset in range(1, 65):\n", " for sample in validation_loader:\n", " img = sample['img'][:, :, offset:offset+384, :384]\n", " mask = sample['mask'][:, :, offset:offset+384, :384]\n", "\n", " output = torch.sigmoid(model(img.to(device))).detach().cpu().numpy().squeeze() \n", "\n", " diffout = (output_noshift[offset:, :384] - output[:-offset, :384])[100:284, 100:284]\n", " offsets.append(offset)\n", " norms.append(np.sum(np.abs(diffout)))\n", " if plot_differences:\n", " plt.figure()\n", " plt.imshow(diffout, cmap='seismic', clim=[-1, 1])\n", " plt.title(f\"Offset {offset}\")\n", " plt.colorbar()\n", " plt.show()\n", "\n", "plt.figure()\n", "plt.plot(offsets, norms)\n", "plt.xlabel('Offset')\n", "plt.ylabel('Difference')\n", "plt.show()"]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 5}
\ No newline at end of file
diff --git a/genindex.html b/genindex.html
index 52eb108..b2f2a79 100644
--- a/genindex.html
+++ b/genindex.html
@@ -166,7 +166,8 @@
In the previous tutorials, you have familiarized yourself with PyTorch, MONAI, and Weights & Biases. In last week’s lecture, you have learned about registration. In this tutorial, you will develop, train, and evaluate a CNN for denoising of (synthetic) CT images.
+
First, let’s take care of the necessities:
+
+
If you’re using Google Colab, make sure to select a GPU Runtime.
+
Connect to Weights & Biases using the code below.
+
Install a few libraries that we will use in this tutorial.
In this tutorial, you will reconstruct CT images. To avoid using too much disk storage, we will synthesize images on the fly using the Deep Inversion Validation Library (dival). These are 2D images with \(128\times 128\) pixels that contain a random number of ellipses with random sizes and random intensities.
+
First, make a dataset of ellipses. This will make an object that we can call for images using a generator. Next, we take a look at what this dataset contains. We will use the generator to ask for a sample. Each sample contains a sinogram and a ground truth (original) synthetic image that we can visualize. You may recall from the lecture that the sinogram is made up of integrals along projections. The horizontal axis in the sinogram corresponds to the location \(s\) along the detector, the vertical axis to the projection angle \(\theta\).
Run the cell below to show a sinogram and image in the dataset.
+
+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+# Get a sample from the generator
+sinogram, ground_truth = next(dat_gen)
+fig, axs = plt.subplots(1, 2, figsize=(10, 5))
+
+# Show the sinogram
+axs[0].imshow(sinogram, cmap='gray', extent=[0, 183, -90, 90])
+axs[0].set_title('Sinogram')
+axs[0].set_xlabel('$s$')
+axs[0].set_ylabel('$\Theta$')
+
+# Show the ground truth image
+axs[1].imshow(ground_truth, cmap='gray')
+axs[1].set_title('Ground truth')
+axs[1].set_xlabel('$x$')
+axs[1].set_ylabel('$y$')
+plt.show()
+
+
+
+
+
+
Exercise
+
What kind of CT reconstruction problem is this? Limited-view or sparse-angle CT? Why?
+
+
+
Answer key
+
This is a sparse-angle CT reconstruction problem. The view spans 180 degrees, but the number of angles is low.
+
+
Not only does the sinogram contain few angles, it also contains added white noise. If we simply backproject the sinogram to the image domain we end up with a low-quality image. Let’s give it a try using the standard Filtered Backprojection (FBP) algorithm for CT and its implementation in scikit-image.
+
+
+
import skimage.transform as sktr
+
+# Get a sample from the generator
+sinogram, ground_truth = next(dat_gen)
+sinogram = np.asarray(sinogram).transpose()
+
+# This defines the projection angles
+theta = np.linspace(-90., 90., sinogram.shape[1], endpoint=True)
+
+# Perform FBP
+fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]
+fig, axs = plt.subplots(1, 3, figsize=(12, 4))
+axs[0].imshow(sinogram.transpose(), cmap='gray', extent=[0, 183, -90, 90])
+axs[0].set_title('Sinogram')
+axs[0].set_xlabel('$s$')
+axs[0].set_ylabel('$\Theta$')
+axs[1].imshow(ground_truth, cmap='gray', clim=[0, 1])
+axs[1].set_title('Ground truth')
+axs[1].set_xlabel('$x$')
+axs[1].set_ylabel('$y$')
+axs[2].imshow(fbp_recon, cmap='gray', clim=[0, 1])
+axs[2].set_title('FBP')
+axs[2].set_xlabel('$x$')
+axs[2].set_ylabel('$y$')
+plt.show()
+
+
+
+
+
+
Exercise
+
What do you think of the quality of the FBP reconstruction? Use the cell below to quantify the similarity between the images using the structural similarity index (SSIM). Does this reflect your intuition? Also compute the PSNR using the peak_signal_noise_ratio function in scikit-image.
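For instance, a minimal sketch with scikit-image, assuming the fbp_recon and ground_truth arrays from the cell above and intensities roughly in the range [0, 1]:

from skimage.metrics import structural_similarity, peak_signal_noise_ratio

gt = np.asarray(ground_truth)
ssim_val = structural_similarity(gt, fbp_recon, data_range=1.0)
psnr_val = peak_signal_noise_ratio(gt, fbp_recon, data_range=1.0)
print(f'SSIM: {ssim_val:.3f}, PSNR: {psnr_val:.2f} dB')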
Our (or your) goal now is to obtain high(er) quality reconstructed images based on the sinogram measurements. As you have seen in the lecture, this can be done in four ways:
+
+
Train a reconstruction method that directly maps from the measurement (sinogram) domain to the image domain.
+
Preprocessing: clean up the sinogram using a neural network, then backproject to the image domain.
+
Postprocessing: first backproject to the image domain, then improve the reconstruction using a neural network.
+
Iterative methods that integrate data consistency.
+
+
Here, we will follow the third approach, postprocessing. We create reconstructions from the generated sinograms using filtered backprojection and use a neural network to learn corrections on this FBP image and improve the reconstruction, as shown in the image below. The data that we need for training this network is the reconstructions from FBP, and the ground-truth reconstructions from the dival dataset.
+
+
We will make a training dataset of 512 samples from the ellipses dival dataset that we store in a MONAI DataSet. The code below does this in four steps:
+
+
Create a dival generator that creates sinograms and ground-truth reconstructions.
+
Make a dictionary (like we did in the previous tutorial) that contains the ground-truth reconstructions and the reconstructions constructed by FBP as separate keys.
+
Define the transforms for the data (also like the previous tutorial). In this case we require an additional ‘channels’ dimension, as that is what the neural network expects. We will not make use of extra data augmentation.
+
Construct the dataset using the dictionary and the defined transform.
+
+
+
+
import tqdm
+import monai
+
+theta = np.linspace(-90., 90., sinogram.shape[1], endpoint=True)
+
+# Make a generator for the training part of the dataset
+train_gen = dataset.generator(part='train')
+train_samples = []
+
+# Make a list of (in this case) 512 random training samples. We store the filtered backprojection (FBP) and ground truth image
+# in a dictionary for each sample, and add these to a list.
+for ns in tqdm.tqdm(range(512)):
+    sinogram, ground_truth = next(train_gen)
+    sinogram = np.asarray(sinogram).transpose()
+    fbp_recon = sktr.iradon(sinogram, theta=theta, filter_name='ramp')[28:-27, 28:-27]
+    train_samples.append({'fbp': fbp_recon, 'ground_truth': np.asarray(ground_truth)})
+
+# You can add or remove transforms here
+train_transform = monai.transforms.Compose([
+    monai.transforms.AddChanneld(keys=['fbp', 'ground_truth'])
+])
+
+# Use the list of dictionaries and the transform to initialize a MONAI CacheDataset
+train_dataset = monai.data.CacheDataset(train_samples, transform=train_transform)
+
+
+
+
+
+
Exercise
+
Also make a validation dataset and call it val_dataset. This dataset can be smaller, e.g., 64 or 128 samples.
Now, make a dataloader for both the validation and training data, called train_loader and validation_loader, that we can use for sampling batches during training of the network. Give them a reasonable batch size, e.g., 16.
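One possible sketch, assuming val_dataset is the validation set you just created and using the suggested batch size of 16:

train_loader = monai.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
validation_loader = monai.data.DataLoader(val_dataset, batch_size=16, shuffle=False)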
Now that we have datasets and dataloaders, the next step is to define a model, optimizer and criterion. Because we want to improve the FBP-reconstructed image, we are dealing with an image-to-image task. A standard U-Net as implemented in MONAI is therefore a good starting point. First, make sure that you are using the GPU (CUDA), otherwise training will be extremely slow.
+
+
+
import torch
+
+if torch.cuda.is_available():
+    device = torch.device("cuda")
+elif torch.backends.mps.is_available():
+    device = torch.device("mps")
+else:
+    device = "cpu"
+print(f'The used device is {device}')
+
+
+
+
+
+
Exercise
+
Initialize a U-Net with the correct settings, e.g. channels and dimensions, and call it model. Here, it’s convenient to use the BasicUNet as implemented in MONAI.
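If you get stuck, one possible configuration is sketched below; the feature sizes are assumptions rather than the reference answer:

model = monai.networks.nets.BasicUNet(
    spatial_dims=2,
    in_channels=1,   # one-channel FBP reconstruction as input
    out_channels=1,  # one-channel corrected image as output
    features=(16, 32, 64, 128, 256, 16),  # assumed feature sizes, feel free to experiment
).to(device)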
An important aspect is the loss function that you will use to optimize the model. The problem that we are trying to solve using a neural network is a regression problem, which differs from the classification approach we covered in the segmentation tutorial. Instead of classifying each pixel as a certain class, we alter their intensities to obtain a better overall reconstruction of the image.
+
Because this task is substantially different, we need to change our loss function. In the previous tutorial we used the Dice loss, which measures the overlap for each of the classes to segment. In this case, an L2 (mean squared error) or L1 (mean absolute error) loss suits our objective. Alternatively, we can use a loss that aims to maximize the structural similarity (SSIM). For this, we use the kornia library.
+
+
+
import kornia
+
+# Three loss functions, turn them on or off by commenting
+
+loss_function = torch.nn.MSELoss()
+# loss_function = torch.nn.L1Loss()
+# loss_function = kornia.losses.SSIMLoss(window_size=3)
+
+
+
+
+
As in previous tutorials, we use an adaptive SGD (Adam) optimizer to train our network. In this tutorial, we add a learning rate scheduler. This scheduler lowers the learning rate every step_size steps, meaning that the optimizer will take smaller steps in the direction of the gradient after a set number of epochs. Therefore, the optimizer can potentially find a better local minimum for the weights of the neural network.
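A minimal sketch of such a setup; the learning rate, step_size, and gamma below are assumptions that you can tune:

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.1)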
What does the model learn? Look carefully at how we determine the output of the model. Can you describe what happens in the following line: outputs = model(batch_data['fbp'].float().to(device)) + batch_data["fbp"].float().to(device)?
+
+
+
Answer key
+
fromtqdm.notebookimporttqdm
+importwandb
+fromskimage.metricsimportstructural_similarityasssim
+
+
+run=wandb.init(
+ project='tutorial4_reconstruction',
+ config={
+ 'loss function':str(loss_function),
+ 'lr':optimizer.param_groups[0]["lr"],
+ 'batch_size':train_loader.batch_size,
+ }
+)
+# Do not hesitate to enrich this list of settings to be able to correctly keep track of your experiments!
+# For example you should include information on your model architecture
+
+run_id=run.id# We remember here the run ID to be able to write the evaluation metrics
+
+deflog_to_wandb(epoch,train_loss,val_loss,batch_data,outputs):
+ """ Function that logs ongoing training variables to W&B """
+
+ # Create list of images that have segmentation masks for model output and ground truth
+ # log_imgs = [wandb.Image(PIL.Image.fromarray(img.detach().cpu().numpy())) for img in outputs]
+ val_ssim=[]
+ forim_idinrange(batch_data['ground_truth'].shape[0]):
+ val_ssim.append(ssim(batch_data['ground_truth'].detach().cpu().numpy()[im_id,0,:,:].squeeze(),
+ outputs.detach().cpu().numpy()[im_id,0,:,:].squeeze()))
+ val_ssim=np.mean(np.asarray(val_ssim))
+ # Send epoch, losses and images to W&B
+ wandb.log({'epoch':epoch,'train_loss':train_loss,'val_loss':val_loss,'val_ssim':val_ssim})
+
+forepochintqdm(range(75)):
+ model.train()
+ epoch_loss=0
+ step=0
+ forbatch_dataintrain_loader:
+ step+=1
+ optimizer.zero_grad()
+ outputs=model(batch_data["fbp"].float().to(device))+batch_data["fbp"].float().to(device)
+ loss=loss_function(outputs,batch_data["ground_truth"].to(device))
+ loss.backward()
+ optimizer.step()
+ epoch_loss+=loss.item()
+ train_loss=epoch_loss/step
+ # validation part
+ step=0
+ val_loss=0
+ forbatch_datainvalidation_loader:
+ step+=1
+ model.eval()
+ outputs=model(batch_data['fbp'].float().to(device))+batch_data["fbp"].float().to(device)
+ loss=loss_function(outputs,batch_data['ground_truth'].to(device))
+ val_loss+=loss.item()
+ val_loss=val_loss/step
+ log_to_wandb(epoch,train_loss,val_loss,batch_data,outputs)
+ scheduler.step()
+
+# Store the network parameters
+torch.save(model.state_dict(),r'trainedUNet.pt')
+run.finish()
+
+
+
+
+
Exercise
+
Now make a DataSet and DataLoader for the test set. Just a handful of images should be enough.
Visualize a number of reconstructions from the neural network and compare them to the FBP-reconstructed images, using the code below. The performance of the network is evaluated using the structural similarity function in scikit-image. Does the neural network improve this metric a lot compared to the filtered backprojection?
The SSIM is definitely improved compared to the standard filtered back projection (FBP). CNN results should be in the order of ~0.8 SSIM.
+
The output images of the CNN are less noisy than the FBP reconstructions. However, they’re also a bit more blotchy/cartoonish.
+
+
+
+
Exercise
+
Instead of a U-Net, try a different model, e.g., a SegResNet in MONAI.
+Evaluate how the different loss functions affect the performance of the network. Note that the SSIM on the validation set is also written to Weights & Biases during training. Which loss leads to the best SSIM scores? Which loss results in the worst SSIM scores?
+
+
+
Answer key
+
In general, using an SSIM loss will lead to better SSIM scores. The L1 loss is also expected to perform better than the MSE loss, as it’s less susceptible to outliers and will smooth the resulting images less.
So far, you have used a post-processing approach for reconstruction. In the lecture, we have discussed an alternative pre-processing approach, in which the sinogram image is improved before FBP. This additional exercise is entirely optional, but you could try to turn the current model into such a model, and see if the results that you get are better or worse than the results obtained so far. Good luck!
We will register chest X-ray images. We will reuse the data of Tutorial 3. As always, we first set the paths. This should be the path ending in ‘ribs’. If you don’t have the data set anymore, you can download it using the lines below:
# ONLY IF YOU USE JUPYTER: ADD PATH ⌨️
+data_path = r'/Users/jmwolterink/Downloads/ribs'  # WHERE DID YOU PUT THE DATA?
+
+
+
+
+
+
+
# ONLY IF YOU USE COLAB: ADD PATH ⌨️
+from google.colab import drive
+
+drive.mount('/content/drive')
+data_path = r'/content/drive/My Drive/Tutorial3'
+
+
+
+
+
+
+
# check if data_path exists:
+import os
+
+if not os.path.exists(data_path):
+    print("Please update your data path to an existing folder.")
+elif not set(["train", "val", "test"]).issubset(set(os.listdir(data_path))):
+    print("Please update your data path to the correct folder (should contain train, val and test folders).")
+else:
+    print("Congrats! You selected the correct folder :)")
+
In this part we prepare all the tools needed to load and visualize our samples. One thing we could do is perform inter-patient registration, i.e., register two chest X-ray images of different patients. However, this is a very challenging problem. Instead, to make our life a bit easier, we will perform intra-patient registration: register two images of the same patient. For each patient, we make a synthetic moving image by applying some random elastic deformations. To build this data set, we use the Rand2DElasticd transform on both the image and the mask. We will use a neural network to learn the deformation field between the fixed image and the moving image.
+
+
As in Tutorial 3, make a dictionary of the image file names.
+
+
+
importos
+importnumpyasnp
+importmatplotlib.pyplotasplt
+importglob
+importmonai
+fromPILimportImage
+importtorch
+
+defbuild_dict_ribs(data_path,mode='train'):
+ """
+ This function returns a list of dictionaries, each dictionary containing the keys 'img' and 'mask'
+ that returns the path to the corresponding image.
+
+ Args:
+ data_path (str): path to the root folder of the data set.
+ mode (str): subset used. Must correspond to 'train', 'val' or 'test'.
+
+ Returns:
+ (List[Dict[str, str]]) list of the dictionaries containing the paths of X-ray images and masks.
+ """
+ # test if mode is correct
+ ifmodenotin["train","val","test"]:
+ raiseValueError(f"Please choose a mode in ['train', 'val', 'test']. Current mode is {mode}.")
+
+ # define empty dictionary
+ dicts=[]
+ # list all .png files in directory, including the path
+ paths_xray=glob.glob(os.path.join(data_path,mode,'img','*.png'))
+ # make a corresponding list for all the mask files
+ forxray_pathinpaths_xray:
+ ifmode=='test':
+ suffix='val'
+ else:
+ suffix=mode
+ # find the binary mask that belongs to the original image, based on indexing in the filename
+ image_index=os.path.split(xray_path)[1].split('_')[-1].split('.')[0]
+ # define path to mask file based on this index and add to list of mask paths
+ mask_path=os.path.join(data_path,mode,'mask',f'VinDr_RibCXR_{suffix}_{image_index}.png')
+ ifos.path.exists(mask_path):
+ dicts.append({'fixed':xray_path,'moving':xray_path,'fixed_mask':mask_path,'moving_mask':mask_path})
+ returndicts
+
+classLoadRibData(monai.transforms.Transform):
+ """
+ This custom Monai transform loads the data from the rib segmentation dataset.
+ Defining a custom transform is simple; just overwrite the __init__ function and __call__ function.
+ """
+ def__init__(self,keys=None):
+ pass
+
+ def__call__(self,sample):
+ fixed=Image.open(sample['fixed']).convert('L')# import as grayscale image
+ fixed=np.array(fixed,dtype=np.uint8)
+ moving=Image.open(sample['moving']).convert('L')# import as grayscale image
+ moving=np.array(moving,dtype=np.uint8)
+ fixed_mask=Image.open(sample['fixed_mask']).convert('L')# import as grayscale image
+ fixed_mask=np.array(fixed_mask,dtype=np.uint8)
+ moving_mask=Image.open(sample['moving_mask']).convert('L')# import as grayscale image
+ moving_mask=np.array(moving_mask,dtype=np.uint8)
+ # mask has value 255 on rib pixels. Convert to binary array
+ fixed_mask[np.where(fixed_mask==255)]=1
+ moving_mask[np.where(moving_mask==255)]=1
+ return{'fixed':fixed,'moving':moving,'fixed_mask':fixed_mask,'moving_mask':moving_mask,'img_meta_dict':{'affine':np.eye(2)},
+ 'mask_meta_dict':{'affine':np.eye(2)}}
+
+
+
+
+
Then we make a training dataset like before. The Rand2DElasticd transform here determines how much deformation is in the ‘moving’ image.
+
+
+
train_dict_list = build_dict_ribs(data_path, mode='train')
+
+# construct Dataset from list of paths + transform
+transform = monai.transforms.Compose(
+    [
+        LoadRibData(),
+        monai.transforms.AddChanneld(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask']),
+        monai.transforms.Resized(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask'], spatial_size=(256, 256), mode=['bilinear', 'bilinear', 'nearest', 'nearest']),
+        monai.transforms.HistogramNormalized(keys=['fixed', 'moving']),
+        monai.transforms.ScaleIntensityd(keys=['fixed', 'moving'], minv=0.0, maxv=1.0),
+        monai.transforms.Rand2DElasticd(keys=['moving', 'moving_mask'], spacing=(64, 64),
+                                        magnitude_range=(-8, 8), prob=1, mode=['bilinear', 'nearest']),
+    ])
+train_dataset = monai.data.Dataset(train_dict_list, transform=transform)
+
+
+
+
+
+
Exercise
+
Visualize fixed and moving training images together with their comparison image using the visualize_fmc_sample function below.
+
Try different methods to create the comparison image. How well do these different methods allow you to qualitatively assess the quality of the registration?
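As a starting point, one rough sketch that loops over the methods supported by skimage.util.compare_images on a single training sample (the sample index and plotting details are just assumptions):

import skimage.util as skut

sample = train_dataset[0]
fixed = np.squeeze(np.asarray(sample['fixed']))
moving = np.squeeze(np.asarray(sample['moving']))
for method in ['diff', 'blend', 'checkerboard']:
    comp = skut.compare_images(fixed, moving, method=method)
    plt.imshow(comp, cmap='gray')
    plt.title(method)
    plt.show()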
Now we apply a little trick. Because applying the random deformation in each training iteration will be very costly, we only apply the deformation once and we make a new dataset based on the deformed images. Running the cell below may take a few minutes.
+
+
+
import tqdm
+
+train_loader = monai.data.DataLoader(train_dataset, batch_size=1, shuffle=False)
+
+samples = []
+for train_batch in tqdm.tqdm(train_loader):
+    samples.append(train_batch)
+
+# Make a new dataset and dataloader using the transformed images
+train_dataset = monai.data.Dataset(samples, transform=monai.transforms.SqueezeDimd(keys=['fixed', 'moving', 'fixed_mask', 'moving_mask']))
+train_loader = monai.data.DataLoader(train_dataset, batch_size=16, shuffle=False)
+
+
+
+
+
+
Exercise
+
Create val_dataset and val_loader, corresponding to the DataSet and DataLoader for your validation set. The transforms can be the same as in the training set.
As a model, we’ll use a U-Net. The input/output structure is quite different from what we’ve seen before:
+
+
the network takes as input two images: the moving and fixed images.
+
it outputs one tensor representing the deformation field.
+
+
+
This deformation field can be applied to the moving image with the monai.networks.blocks.Warp block of Monai.
+
+
This deformed moving image is then compared to the fixed image: if they are similar, the deformation field is correctly registering the moving image on the fixed image. Keep in mind that this is done on training data, and we want the U-Net to learn to predict a proper deformation field given two new and unseen images. So we’re not optimizing for a pair of images as would be done in conventional iterative registration, but training a model that can generalize.
+
+
Before starting, let’s check that you can work on a GPU by running the following cell:
+
+
if the device is “cuda” you are working on a GPU,
+
if the device is “cpu” call a teacher.
+
+
+
+
if torch.cuda.is_available():
+    device = torch.device("cuda")
+elif torch.backends.mps.is_available():
+    device = torch.device("mps")
+    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
+else:
+    device = "cpu"
+print(f'The used device is {device}')
+
+
+
+
+
+
Exercise
+
Construct a U-Net with suitable settings and name it model. Check that you can correctly apply its output to the input moving image with the warp_layer!
+
+
+
+
+
model = # FILL IN
+
+warp_layer = monai.networks.blocks.Warp().to(device)
+
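If you get stuck, one possible configuration is sketched below (not the reference answer; the feature sizes are assumptions): a BasicUNet that takes the concatenated moving and fixed images as two input channels and predicts a two-channel 2D displacement field.

model = monai.networks.nets.BasicUNet(
    spatial_dims=2,
    in_channels=2,   # moving and fixed image concatenated along the channel dimension
    out_channels=2,  # one displacement channel per spatial direction
    features=(16, 32, 32, 64, 128, 16),  # assumed feature sizes
).to(device)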
We evaluate the similarity between the fixed image and the deformed moving image with the MSELoss(). The L1 or SSIM losses seen in the previous section could also be used. Furthermore, the deformation field is regularized with BendingEnergyLoss. This is a penalty that takes the smoothness of the deformation field into account: if it’s not smooth enough, the bending energy is high. Thus, our model will favor smooth deformation fields.
+
Finally, we pick an optimizer, in this case again an Adam optimizer.
Add a learning rate scheduler that lowers the learning rate by a factor of ten every 100 epochs.
+
+
+
+
# Your code goes here
+
+
+
+
+
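The training loop further below assumes an image_loss, a regularization term, an optimizer, and (from the exercise above) a scheduler. One possible setup is sketched here; the learning rate and scheduler settings are assumptions:

image_loss = torch.nn.MSELoss()
regularization = monai.losses.BendingEnergyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

If you add the scheduler, remember to call scheduler.step() once per epoch in the training loop.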
To warp the moving image using the predicted deformation field and then compute the loss between the deformed image and the fixed image, we define a forward function which does all this. The output of this function is pred_image.
+
+
+
def forward(batch_data, model):
+    """
+    Applies the model to a batch of data.
+
+    Args:
+        batch_data (dict): a batch of samples computed by a DataLoader.
+        model (Module): a model computing the deformation field.
+
+    Returns:
+        ddf (Tensor): batch of deformation fields.
+        pred_image (Tensor): batch of deformed moving images.
+
+    """
+    fixed_image = batch_data["fixed"].to(device).float()
+    moving_image = batch_data["moving"].to(device).float()
+
+    # predict DDF
+    ddf = model(torch.cat((moving_image, fixed_image), dim=1))
+
+    # warp moving image and label with the predicted ddf
+    pred_image = warp_layer(moving_image, ddf)
+
+    return ddf, pred_image
+
+
+
+
+
You can monitor the training process in W&B: at each epoch, a batch of validation images is used to compute the comparison images of your choice, based on the parameter method.
+
+
+
def log_to_wandb(epoch, train_loss, val_loss, pred_batch, fixed_batch, method="checkerboard"):
+    """ Function that logs ongoing training variables to W&B """
+    import skimage.util as skut
+
+    log_imgs = []
+    for pred_pt, fixed_pt in zip(pred_batch, fixed_batch):
+        fixed_np = np.squeeze(fixed_pt.cpu().detach())
+        pred_np = np.squeeze(pred_pt.cpu().detach())
+        comp_checker = skut.compare_images(fixed_np, pred_np, method=method)
+        log_imgs.append(wandb.Image(comp_checker))
+
+    # Send epoch, losses and images to W&B
+    wandb.log({'epoch': epoch, 'train_loss': train_loss, 'val_loss': val_loss, 'results': log_imgs})
+
Use the following cells to train your network. You may choose different parameters to improve the performance!
+
+
+
# Choose your parameters
+
+max_epochs = 200
+reg_weight = 0  # By default 0, but you can investigate what it does
+
+
+
+
+
+
+
fromtqdmimporttqdm
+
+run=wandb.init(
+ project='tutorial4_registration',
+ config={
+ 'lr':optimizer.param_groups[0]["lr"],
+ 'batch_size':train_loader.batch_size,
+ 'regularization':reg_weight,
+ 'loss_function':str(image_loss)
+ }
+)
+# Do not hesitate to enrich this list of settings to be able to correctly keep track of your experiments!
+# For example you should add information on your model...
+
+run_id=run.id# We remember here the run ID to be able to write the evaluation metrics
+
+forepochintqdm(range(max_epochs)):
+ model.train()
+ epoch_loss=0
+ forbatch_dataintrain_loader:
+ optimizer.zero_grad()
+
+ ddf,pred_image=forward(batch_data,model)
+
+ fixed_image=batch_data["fixed"].to(device).float()
+ reg=regularization(ddf)
+ loss=image_loss(pred_image,fixed_image)+reg_weight*reg
+ loss.backward()
+ optimizer.step()
+ epoch_loss+=loss.item()
+
+ epoch_loss/=len(train_loader)
+
+ model.eval()
+ val_epoch_loss=0
+ forbatch_datainval_loader:
+ ddf,pred_image=forward(batch_data,model)
+ fixed_image=batch_data["fixed"].to(device).float()
+ reg=regularization(ddf)
+ loss=image_loss(pred_image,fixed_image)+reg_weight*reg
+ val_epoch_loss+=loss.item()
+ val_epoch_loss/=len(val_loader)
+
+ log_to_wandb(epoch,epoch_loss,val_epoch_loss,pred_image,fixed_image)
+
+run.finish()
+
Now that the model has been trained, it’s time to evaluate its performance. Use the code below to visualize samples and deformation fields.
+
+
Exercise
+
Are you satisfied with these registration results? Do they seem anatomically plausible? Try out different regularization factors (reg_weight) and see what they do to the registration.
+
+
Answer:
+
+
+
defvisualize_prediction(sample,model,method="checkerboard"):
+ """
+ Plot three images: fixed, moving and comparison.
+
+ Args:
+ sample (dict): sample of dataset created with `build_dataset`.
+ model (Module): a model computing the deformation field.
+ method (str): method used by `skimage.util.compare_image`.
+ """
+ importskimage.utilasskut
+
+ skut_methods=["diff","blend","checkerboard"]
+ ifmethodnotinskut_methods:
+ raiseValueError(f"Method must be chosen in {skut_methods}.\n"
+ f"Current value is {method}.")
+
+ model.eval()
+
+ # Compute deformation field + deformed image
+ batch_data={
+ "fixed":sample["fixed"].unsqueeze(0),
+ "moving":sample["moving"].unsqueeze(0),
+ }
+ ddf,pred_image=forward(batch_data,model)
+ ddf=ddf.detach().cpu().numpy().squeeze()
+ ddf=np.linalg.norm(ddf,axis=0).squeeze()
+
+ # Squeeze images
+ fixed=np.squeeze(sample["fixed"])
+ moving=np.squeeze(sample["moving"])
+ deformed=np.squeeze(pred_image.detach().cpu())
+
+ # Generate comparison image
+ comp_checker=skut.compare_images(fixed,deformed,method=method,n_tiles=(4,4))
+
+ # Plot everything
+ fig,axs=plt.subplots(1,5,figsize=(18,5))
+ axs[0].imshow(fixed,cmap='gray')
+ axs[0].set_title('Fixed')
+ axs[1].imshow(moving,cmap='gray')
+ axs[1].set_title('Moving')
+ axs[2].imshow(deformed,cmap='gray')
+ axs[2].set_title('Deformed')
+ axs[3].imshow(comp_checker,cmap='gray')
+ axs[3].set_title('Comparison')
+ dpl=axs[4].imshow(ddf,clim=(0,10))
+ fig.colorbar(dpl,ax=axs[4])
+ plt.show()
+ plt.show()
+forsampleinval_dataset:
+ visualize_prediction(sample,model)
+
+
+
+
+
+
Exercise
+
Compute the Jacobian determinant at each image voxel. How many of these are negative? Can you improve upon this?
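A rough sketch of one way to start, assuming ddf is a NumPy displacement field of shape (2, H, W) in pixel units (for example the squeezed network output from visualize_prediction before taking the norm):

def jacobian_determinant_2d(ddf):
    """Determinant of the Jacobian of the transform x + u(x) for a (2, H, W) displacement field."""
    du0_d0, du0_d1 = np.gradient(ddf[0])  # derivatives of the first displacement component
    du1_d0, du1_d1 = np.gradient(ddf[1])  # derivatives of the second displacement component
    return (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0

det_j = jacobian_determinant_2d(ddf)
print('Fraction of pixels with a negative Jacobian determinant:', np.mean(det_j < 0))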
In this part, we are going to use some concepts that you’ve learned in the lecture on geometric deep learning. We are going to look at the equivariance properties of a neural network architecture that you should by now be very familiar with: the U-Net. We will again use the chest X-ray segmentation problem. Because training a network is not the focus here, we have pretrained a network that you can use for these experiments.
We will again use the same utility functions as in Tutorial 3 to build a dictionary of files and load rib data.
+
+
+
importos
+importnumpyasnp
+importmatplotlib.pyplotasplt
+importglob
+importmonai
+fromPILimportImage
+importtorch
+
+defbuild_dict_ribs(data_path,mode='train'):
+ """
+ This function returns a list of dictionaries, each dictionary containing the keys 'img' and 'mask'
+ that returns the path to the corresponding image.
+
+ Args:
+ data_path (str): path to the root folder of the data set.
+ mode (str): subset used. Must correspond to 'train', 'val' or 'test'.
+
+ Returns:
+ (List[Dict[str, str]]) list of the dictionaries containing the paths of X-ray images and masks.
+ """
+ # test if mode is correct
+ ifmodenotin["train","val","test"]:
+ raiseValueError(f"Please choose a mode in ['train', 'val', 'test']. Current mode is {mode}.")
+
+ # define empty dictionary
+ dicts=[]
+ # list all .png files in directory, including the path
+ paths_xray=glob.glob(os.path.join(data_path,mode,'img','*.png'))
+ # make a corresponding list for all the mask files
+ forxray_pathinpaths_xray:
+ ifmode=='test':
+ suffix='val'
+ else:
+ suffix=mode
+ # find the binary mask that belongs to the original image, based on indexing in the filename
+ image_index=os.path.split(xray_path)[1].split('_')[-1].split('.')[0]
+ # define path to mask file based on this index and add to list of mask paths
+ mask_path=os.path.join(data_path,mode,'mask',f'VinDr_RibCXR_{suffix}_{image_index}.png')
+ ifos.path.exists(mask_path):
+ dicts.append({'img':xray_path,'mask':mask_path})
+ returndicts
+
+classLoadRibData(monai.transforms.Transform):
+ """
+ This custom Monai transform loads the data from the rib segmentation dataset.
+ Defining a custom transform is simple; just overwrite the __init__ function and __call__ function.
+ """
+ def__init__(self,keys=None):
+ pass
+
+ def__call__(self,sample):
+ image=Image.open(sample['img']).convert('L')# import as grayscale image
+ image=np.array(image,dtype=np.uint8)
+ mask=Image.open(sample['mask']).convert('L')# import as grayscale image
+ mask=np.array(mask,dtype=np.uint8)
+ # mask has value 255 on rib pixels. Convert to binary array
+ mask[np.where(mask==255)]=1
+ return{'img':image,'mask':mask,'img_meta_dict':{'affine':np.eye(2)},
+ 'mask_meta_dict':{'affine':np.eye(2)}}
+
+
+
+
+
Use the cell below to make a validation loader with a single image. This is sufficient for the small experiment that you will perform.
We have already trained a model for you, the parameters of which were shared in JupyterLab as well.
+Note: if you downloaded the data set yourself, the model should be in the same folder as the images.
+If you already downloaded the data set but not the model, the model file is available here.
As you can see, segmentation isn’t perfect, but that’s also not the goal of this exercise. What we are going to look into is the translation equivariance (Lecture 8) of the U-Net. That is: if you translate the image by \(d\) pixels, does the output also simply change by \(d\) pixels. Note that this is a nice feature to have for a segmentation network: in principle we’d want our network to give us the same label for a pixel regardless of where the image was cut. The image below visualizes this principle. For segmentation of the pixels in the orange square, it shouldn’t matter if we provide the red square or the green square as input to the U-Net.
+
+
+
Exercise
+
What do you think will happen to the U-Net’s prediction if we give it a slightly shifted version of the image as input?
+
+
Now we make a small script that performs the above experiment. First, we obtain the segmentation in the red box and we call this output_noshift. Then we shift the green box by an offset and each time obtain a segmentation in this box using the same model. We start small with a shift/offset of just a single pixel.
+
+
Exercise
+
Run the cell below and observe the outputs. Can you spot differences between the two segmentation masks?
To highlight the differences between both segmentation masks a bit more, we make a difference image. We correct for the shift applied so that we’re not comparing apples and oranges. The next cell shows the difference image between the original image and what we get when we process an image that is shifted by one pixel.
+
+
Exercise
+
Given these results, is a U-Net translation equivariant, invariant, or neither?
We can repeat this for larger offsets. Let’s take offsets up to 64 pixels, and each time compute the difference between the original and shifted image, in a subimage that should be unaffected by the shift. We store the L1 norm of the difference image in an array norms and plot these as a function of offset.
+
+
Exercise
+
The resulting plot shows that the U-Net is equivariant for none of the translations. This is due to a combination of border effects and downsampling layers. However, the plot also shows a particular pattern, in which the norm dips every 16 pixels of offset. Can you explain this based on the U-Net architecture?
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/objects.inv b/objects.inv
index 37c37fd..44d0883 100644
--- a/objects.inv
+++ b/objects.inv
@@ -2,7 +2,5 @@
# Project: Python
# Version:
# The remainder of this file is compressed using zlib.
-xÚµ•Ïnƒ0Æï</ÐMôÏeצjZUJ;FibZ£´åí‚j€ÖÍ0zA‘ýùËÏŽ„0™‘p 'Ÿ?‚
-Q˜´Ñ‡Ôinëcð-“(jQGñJ †@£üÁ-/×D#Cé´¾?‹Çƒwh3®"¶ó:/ãLç
-¶‡×› V8/Á¸.Õv
äUFÿBtÀE
-v.ÄÖŽ€¸dÏhN¨¼ËÐpµoë¯;£¥ªhóñ¢A[Ë;µEz“/"´µb1$º¯n„iï1Êj€¶šŒFšé(+Úš½€[•€UP¿æhó›h:À]Ï€KšéDSƒ@S8ëEf Ú\GÛ
7Ag‘ü±aö¥KÑôMÜŠ´_ØÄšš¸ÉïyÁ6´
\ No newline at end of file
+xÚµ•Ënƒ0E÷|?Väµè¶‹*ªE%R—–cF€j{±Óò÷AT°šv tƒÐ<®ÏÜA8ë>ÃÒ&’Ÿ@†‹(L»è]f•wÍkð]– hŠzo *&?¨Õâ’hË_0q:]NˆïåýÑY49—Û;UTq®
+ »ãóÕ+K@Û>Õr-ä¥*ŒnB´ÀEf.ÄNŽ€¸d¨Ï(ÍQs¹gš§ý@C¢ù;ãAÞXË‹´“"Œµb1¤ª¶€7BW´}Œ’òÐV“ÑHžŽ’" Ùh0uÛXýWùkŽæßDQw=.ÉÓ‰¢Ü
{º´Æ‰Á¾üÍ×Ñrâæ&D’—£åˆÛº5ÍëÎá7?Ó%å¡mƒÞ5üÇý|¨l†zxM—ÀȆm¬í‰Ûü§|‘%/G
\ No newline at end of file
diff --git a/search.html b/search.html
index 3d55396..3969e23 100644
--- a/search.html
+++ b/search.html
@@ -168,7 +168,8 @@