
Downsampling Augmentation Transform #3781

Closed
lyndonboone opened this issue Feb 8, 2022 · 9 comments · Fixed by #6806

Comments

@lyndonboone
Contributor

Is your feature request related to a problem? Please describe.
When working on augmentation for DL pipelines, it would be nice to have a transform that downsamples the image by a factor (as opposed to resampling to a specified tensor shape or voxel size) while preserving the shape of the input, to simulate images acquired at low resolution that have been resampled to a larger shape.

Describe the solution you'd like
An ideal solution would downsample the image by a specified factor, then upsample back to the original shape. This would be similar to RandomAnisotropy from TorchIO (https://torchio.readthedocs.io/transforms/augmentation.html), but with the capability to do isotropic downsampling.

Describe alternatives you've considered
I can get close to the desired behavior by placing two instances of a Zoom transform one after another, with reciprocal zoom factors and keep_size=False (e.g., Compose([Zoom(zoom=0.5, keep_size=False), Zoom(zoom=2.0, keep_size=False)])); however, this solution doesn't guarantee that the output shape will be the same as the input (although it will likely be close). For example, if the input shape is [192, 256, 256] and the desired downsampling factor is 3, the output shape will be [192, 255, 255].
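The voxel-off drift comes from rounding in the underlying resize. A minimal sketch of that arithmetic (assuming floor rounding, as in torch's `interpolate`; the helper name is illustrative):

```python
import math

def composed_zoom_shape(size, factor):
    # Zoom with keep_size=False floors the scaled size, so a
    # down-then-up compose can lose a voxel along an axis
    down = math.floor(size / factor)
    return math.floor(down * factor)

for s in (192, 256, 256):
    print(s, "->", composed_zoom_shape(s, 3))  # 192 -> 192, 256 -> 255
```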

There may be an alternative solution that I haven't considered. Otherwise, if you think this is worth adding as a new transform (or an adaptation to an existing transform), I'd be more than happy to try to submit a PR.

@wyli
Contributor

wyli commented Feb 8, 2022

For the alternative solution, I think you can use `ResizeWithPadOrCrop` as the final part of the compose to make the shapes consistent.

If the input shape is not known beforehand, perhaps a new transform is needed? Would be great to have your PR on this...
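To illustrate the suggestion, here is a rough NumPy stand-in for the role `ResizeWithPadOrCrop` plays at the end of such a compose (a hypothetical `pad_or_crop` helper, not MONAI's implementation):

```python
import numpy as np

def pad_or_crop(a, spatial_size):
    # center-pad or center-crop each spatial axis of a channel-first
    # array to the target size, mimicking ResizeWithPadOrCrop
    for ax, target in enumerate(spatial_size, start=1):
        cur = a.shape[ax]
        if cur < target:
            before = (target - cur) // 2
            pad = [(0, 0)] * a.ndim
            pad[ax] = (before, target - cur - before)
            a = np.pad(a, pad)
        elif cur > target:
            start = (cur - target) // 2
            sl = [slice(None)] * a.ndim
            sl[ax] = slice(start, start + target)
            a = a[tuple(sl)]
    return a

# e.g. fix the [255, 255] drift back to [256, 256]
fixed = pad_or_crop(np.zeros((1, 255, 255)), (256, 256))
```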

@lyndonboone
Contributor Author

Hi @wyli , thanks for the comment.

Yes, exactly; I was thinking of the case where the input shape is not known beforehand. I'd be happy to submit a PR for a new transform (assuming it isn't more appropriate to adapt an existing transform). What should the new transform be called? Perhaps something with Resolution in the name?

@wyli
Contributor

wyli commented Feb 9, 2022

great thanks! perhaps the name could be RandReinterpolate?

@rijobro
Contributor

rijobro commented Feb 11, 2022

We would want a random and a non-random version, I suppose. @lyndonboone you could calculate the input size at the start of the `__call__` method and then create the `ResizeWithPadOrCrop` accordingly. That way, you wouldn't need to know the image shape a priori.
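A minimal sketch of that idea, with nearest-neighbour stand-ins for `Zoom` and `ResizeWithPadOrCrop` (the class name is hypothetical, not the eventual MONAI API):

```python
import numpy as np

class SimulateLowResSketch:
    """Downsample a channel-first array by an integer factor, then
    restore the original spatial shape, computed at call time."""

    def __init__(self, factor: int):
        self.factor = factor

    def __call__(self, img):
        spatial = img.shape[1:]  # input shape read here, not a priori
        # crude downsample: strided slicing (stand-in for Zoom)
        down = img[(slice(None),) + tuple(slice(None, None, self.factor)
                                          for _ in spatial)]
        # crude upsample: repeat along each spatial axis
        up = down
        for ax in range(1, img.ndim):
            up = np.repeat(up, self.factor, axis=ax)
        # crop back to the original shape (the ResizeWithPadOrCrop role)
        return up[(slice(None),) + tuple(slice(0, s) for s in spatial)]
```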

@ashwinkumargb

Have there been any updates on this? I would be interested in this functionality, especially something similar to batchgenerators' SimulateLowResolutionTransform: https://github.com/MIC-DKFZ/batchgenerators/blob/master/batchgenerators/transforms/resample_transforms.py

@wyli
Contributor

wyli commented Jul 29, 2023

@aaronkujawa has kindly shared his implementations but does not yet have the bandwidth to integrate them into MONAI core: https://github.com/aaronkujawa/MONAI/blob/04e24e3e1d6030d70063aedc97884f4cd6eebb75/monai/transforms/spatial/dictionary.py#L2151. I'm labelling this as contribution wanted.

@wyli wyli closed this as completed in #6806 Aug 1, 2023
wyli pushed a commit that referenced this issue Aug 1, 2023
…d corresponding unit tests (#6806)

Fixes #3781.

### Description
Random simulation of low resolution, corresponding to nnU-Net's implementation
(https://github.com/MIC-DKFZ/batchgenerators/blob/7651ece69faf55263dd582a9f5cbd149ed9c3ad0/batchgenerators/transforms/resample_transforms.py#L23).
First, the array/tensor is resampled at a lower resolution determined
by the `zoom_factor`, which is uniformly sampled from the `zoom_range`.
Then, the array/tensor is resampled back to the original resolution. MONAI's
`Resize` transform is used for both resampling operations.
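The steps above can be sketched in plain NumPy, using a nearest-neighbour resize as a stand-in for MONAI's `Resize` (function names here are illustrative only):

```python
import numpy as np

def resize_nearest(a, shape):
    # nearest-neighbour resize of a channel-first array to spatial `shape`
    for ax, n in enumerate(shape, start=1):
        src = a.shape[ax]
        idx = np.minimum((np.arange(n) * src / n).astype(int), src - 1)
        a = np.take(a, idx, axis=ax)
    return a

def rand_simulate_low_res(img, zoom_range=(0.5, 1.0), rng=None):
    if rng is None:
        rng = np.random.default_rng()
    zf = rng.uniform(*zoom_range)  # zoom factor sampled from zoom_range
    low = tuple(max(1, round(s * zf)) for s in img.shape[1:])
    # resample down, then back to the original resolution
    return resize_nearest(resize_nearest(img, low), img.shape[1:])

out = rand_simulate_low_res(np.random.default_rng(0).standard_normal((1, 8, 8, 8)))
# out.shape == (1, 8, 8, 8), with low-resolution blockiness inside
```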

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [x] In-line docstrings updated.
- [x] Documentation updated, tested `make html` command in the `docs/`
folder.

---------

Signed-off-by: Aaron Kujawa <askujawa@gmail.com>
@function2-llx
Contributor

I've got one question about `RandSimulateLowResolution`. nnU-Net upsamples the downsampled image with cubic interpolation (according to the supplementary material, Section 4). Currently, only linear interpolation is available for 3D images.

@aaronkujawa
Contributor

> I've got one question about `RandSimulateLowResolution`. nnU-Net upsamples the downsampled image with cubic interpolation (according to the supplementary material, Section 4). Currently, only linear interpolation is available for 3D images.

That's true. This transform uses MONAI's Resize transform which is based on torch's interpolate function, which does not offer cubic interpolation in 3D. I guess one could try to replace Resize with MONAI's Resample transform or ResampleToMatch if cubic interpolation is a requirement.

@function2-llx
Contributor

FYI, it seems that the following will work with 3D cubic interpolation now:

```python
import numpy as np

from monai import transforms as mt

def main():
    img = np.random.randn(1, 6, 6, 6)
    zoom = np.array([2, 2, 2])
    affine = mt.Affine(
        scale_params=(1 / zoom).tolist(),
        spatial_size=(np.array(img.shape[1:]) * zoom).astype(np.int32).tolist(),
        mode=3,  # spline interpolation order 3, i.e. cubic
        image_only=True,
    )
    zoomed_img = affine(img)

if __name__ == '__main__':
    main()
```
