
MONAI transforms can't only support Tensor backend #2933

Closed
Nic-Ma opened this issue Sep 13, 2021 · 11 comments · Fixed by #2949
Assignees
Labels
enhancement New feature or request

Comments

@Nic-Ma
Contributor

Nic-Ma commented Sep 13, 2021

Is your feature request related to a problem? Please describe.
Currently, some transforms only support the Tensor backend, which breaks previous numpy-based transform chains.
They need to be updated so that every transform supports both the numpy and Tensor backends, or the numpy backend only.

@Nic-Ma Nic-Ma self-assigned this Sep 13, 2021
@Nic-Ma Nic-Ma added enhancement New feature or request and removed Feature request labels Sep 13, 2021
@Nic-Ma
Contributor Author

Nic-Ma commented Sep 13, 2021

Hi @rijobro ,

I will try to enhance them ASAP before the 0.7 release if you have no concerns.

Thanks in advance.

@rijobro
Contributor

rijobro commented Sep 13, 2021

Hi, all of the transforms that I have been updating will perform any necessary numpy<->torch conversions. Do you mean that the updated transforms might now return the data in a different format (e.g., torch instead of numpy) and some of the old transforms can't handle that?

@Nic-Ma
Contributor Author

Nic-Ma commented Sep 13, 2021

Hi @rijobro ,

Yes, that's the problem: old transforms can't handle Tensor output. And there is a second problem: if we feed a numpy array into a random transform that outputs a Tensor, and its prob is < 1.0, then some outputs will be numpy arrays and some will be Tensors, so we can't collate them in the DataLoader.

Thanks.
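To illustrate the collation problem with a minimal sketch (the `rand_to_tensor` function below is a hypothetical stand-in for a random transform, not an actual MONAI transform):

```python
import numpy as np
import torch

def rand_to_tensor(img, prob, rng):
    # hypothetical stand-in: the Tensor backend only runs when the
    # random transform actually fires
    if rng.random() < prob:
        return torch.as_tensor(img) * 2  # applied: torch.Tensor output
    return img  # skipped: numpy input passes through unchanged

rng = np.random.default_rng(0)
batch = [rand_to_tensor(np.ones((2, 2)), prob=0.5, rng=rng) for _ in range(4)]
types = {type(b).__name__ for b in batch}
# `types` now contains both 'ndarray' and 'Tensor', so the default
# DataLoader collate cannot simply stack the batch
```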

@rijobro
Contributor

rijobro commented Sep 13, 2021

For random transforms, I've tried to perform the same data conversion that would have occurred if the transform had been applied. If I've missed that anywhere, please let me know and I'll update. (Example of the conversion when the transform is not applied here.)
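In other words, a random transform along these lines converts its output even on the no-op path (a sketch under that assumption, not MONAI's actual code):

```python
import numpy as np
import torch

def rand_scale(img, prob, rng):
    # hypothetical random transform whose computation uses torch
    if rng.random() < prob:
        return torch.as_tensor(img) * 2.0  # transform applied
    # not applied: still perform the same numpy -> torch conversion,
    # so downstream transforms always see a consistent type
    return torch.as_tensor(img)
```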

@Nic-Ma
Contributor Author

Nic-Ma commented Sep 13, 2021

Thanks @rijobro ,

I already fixed this kind of issue in the RandZoom transform yesterday, but I haven't checked the other transforms. I still think we should not support only the Tensor backend, to stay compatible with the other, older transforms.

Thanks.

@wyli
Contributor

wyli commented Sep 13, 2021

Just to clarify, my understanding is that the backend is about how the computation is implemented, using numpy or torch API. For now it's fine to have numpy-only or torch-only backend. But at the moment for #2231 we should ensure all the transforms support these input-output types:

  • input: numpy, output: numpy (not breaking the previous versions of preprocessing)
  • (optionally) input: torch, output: torch (initial new feature for running all GPU-based transform chains)
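A transform satisfying both contracts can dispatch on the input type. A minimal sketch (the `flip_last_dim` function is hypothetical, not an actual MONAI transform):

```python
import numpy as np
import torch

def flip_last_dim(img):
    # torch in -> torch out
    if isinstance(img, torch.Tensor):
        return torch.flip(img, dims=[-1])
    # numpy in -> numpy out (copy() because np.flip returns a view)
    return np.flip(img, axis=-1).copy()
```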

@rijobro
Contributor

rijobro commented Sep 13, 2021

My vision is that, if necessary, each transform will convert to the correct image type. That is to say, if the computation requires torch.Tensor and input is numpy, then convert. To reduce the number of conversions, I don't convert back to the original type at the end of the transform. This means that all transforms will need to be able to accept torch or numpy input. By the time I've finished updating all transforms, this will be the case.

However, it seems that since we're currently part-way through the changes, some transforms will be in chains that would have originally received numpy input, but now they're getting torch tensors instead.

I suppose as a temporary fix, we could modify all problematic transforms to have something like this at the start of the __call__ function:

def __call__(self, img):
    img, *_ = convert_data_type(img, np.ndarray)
    ...
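For reference, a minimal sketch of what such a converter could look like (MONAI's real `convert_data_type` helper does more, e.g. tracking dtype and device; this is only an illustration):

```python
import numpy as np
import torch

def convert_data_type(data, output_type):
    # minimal numpy <-> torch converter; returns the converted data
    # plus the original type so callers could convert back later
    orig_type = type(data)
    if output_type is np.ndarray and isinstance(data, torch.Tensor):
        return data.detach().cpu().numpy(), orig_type
    if output_type is torch.Tensor and isinstance(data, np.ndarray):
        return torch.as_tensor(data), orig_type
    return data, orig_type
```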

@Nic-Ma
Contributor Author

Nic-Ma commented Sep 13, 2021

Hi @rijobro ,

Yes, that's exactly what I planned to do for this ticket.

Thanks.

@rijobro
Contributor

rijobro commented Sep 13, 2021

Great.

However, converting back to the original type is something I was trying to avoid. Rather than converting back at the end of each transform, all other transforms should be able to handle both torch and numpy input.

@Nic-Ma
Contributor Author

Nic-Ma commented Sep 13, 2021

Hi @rijobro ,

I agree with your point; maybe once we've added Tensor support to all the existing transforms, we can refactor the output logic.
But for now, we need to make sure that at least the existing numpy transform chains keep working after the 0.7 release.

Thanks.

@rijobro
Contributor

rijobro commented Sep 13, 2021

Ok, sounds reasonable.
