[FEA] A graphical debugger for image loader #160
Thanks @vikashg for the issue. For now, we could interleave the SaveImageD transform in the pre-transforms and post-transforms to save the Numpy arrays (actually the keyed torch tensors in the MONAI/torch dataset) as nii files on disk (which requires moving tensors off the GPU device if one is used), each with an app-configured postfix to distinguish the file names of the intermediate images. This is no replacement for stepping through the transforms and inspecting the images in real time, but it can be used for debugging for now. I may create an example for it (in my app for another project, I chained up multiple SaveImageD transforms in the post-transforms just to get the intermediate images, so I know this approach works). Also, for rendering the final result (mask) image along with the input, there is the Render Server piece coming soon, and its volumetric rendering, even in cine mode, really blew me away! Besides, it supports DICOM too.
@vikashg I have quickly updated the UNETR example app, just to show that interleaving SaveImageD in the pre-transforms would at least preserve the intermediate images when the app runs, as seen in the output below, followed by the code snippet that makes it happen.
The (temp) code that makes saving intermediate images happen
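The snippet referenced above was not preserved in this thread, but the pattern can be sketched as a dictionary-based save transform interleaved between pre-transforms. This is a hedged stand-in: the `SaveIntermediated` class, the `.npy` output format, and the postfix naming below are illustrative assumptions, not the actual UNETR app code (which would use MONAI's SaveImageD writing NIfTI).

```python
import os
import tempfile
import numpy as np

class SaveIntermediated:
    """Illustrative stand-in for MONAI's SaveImageD: saves the keyed
    array to disk with a per-step postfix to distinguish file names."""

    def __init__(self, keys, output_dir, output_postfix):
        self.keys = keys
        self.output_dir = output_dir
        self.output_postfix = output_postfix

    def __call__(self, data):
        for key in self.keys:
            arr = np.asarray(data[key])
            path = os.path.join(self.output_dir, f"{key}_{self.output_postfix}.npy")
            np.save(path, arr)  # stand-in for writing nii via SaveImageD
        return data

# Interleave one saver after each pre-transform of interest.
out_dir = tempfile.mkdtemp()
pre_transforms = [
    lambda d: {**d, "image": d["image"] / d["image"].max()},  # e.g. intensity scaling
    SaveIntermediated(["image"], out_dir, "scaled"),
    lambda d: {**d, "image": d["image"][::2, ::2]},           # e.g. downsampling
    SaveIntermediated(["image"], out_dir, "resized"),
]

data = {"image": np.arange(16.0).reshape(4, 4)}
for t in pre_transforms:
    data = t(data)
print(sorted(os.listdir(out_dir)))  # image_resized.npy, image_scaled.npy
```

As the thread notes, the drawback is that these savers must be removed again by hand once debugging is done.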
We can inherit MONAI's Compose class and override https://docs.monai.io/en/latest/_modules/monai/transforms/compose.html#Compose.__call__

```python
def __call__(self, input_):
    for _transform in self.transforms:
        input_ = apply_transform(_transform, input_, self.map_items, self.unpack_items)
    return input_
```
I know, it is doable in an OO way, but given that there are many transforms, and the image capture/validation needs to target specific transforms, blanket instrumentation may not work; for timing, maybe, but it needs to be a no-op at production time. As I said, I'd rather create a new transform, an extension, or some other means to give developers the flexibility to target specific transforms, defeatable for production of course. There is also a need to support plugging in and launching the visualization module, so it is not just simple instrumentation; otherwise a decorator (remember decorators are already used in some of the transforms) would suffice.
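The subclassing idea under discussion can be sketched roughly as follows. This is a minimal illustration, not MONAI code: the `DebugCompose` name, the `debug` flag, and the `on_step` callback hook are assumptions, and the base `Compose` here is a stand-in for `monai.transforms.Compose`.

```python
class Compose:
    """Minimal stand-in for monai.transforms.Compose."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, input_):
        for _transform in self.transforms:
            input_ = _transform(input_)
        return input_


class DebugCompose(Compose):
    """Compose that invokes a callback after each transform, and is a
    plain pass-through (no-op instrumentation) when debug is False."""

    def __init__(self, transforms, debug=False, on_step=None):
        super().__init__(transforms)
        self.debug = debug
        self.on_step = on_step or (lambda idx, name, data: None)

    def __call__(self, input_):
        if not self.debug:
            return super().__call__(input_)  # production path: no overhead
        for idx, _transform in enumerate(self.transforms):
            input_ = _transform(input_)
            # Hook point: save, visualize, or time the intermediate result.
            self.on_step(idx, type(_transform).__name__, input_)
        return input_


steps = []
pipeline = DebugCompose(
    [lambda x: x + 1, lambda x: x * 2],
    debug=True,
    on_step=lambda idx, name, data: steps.append((idx, data)),
)
result = pipeline(3)
print(result, steps)  # 8 [(0, 4), (1, 8)]
```

A callback that checks the transform name before acting would give the per-transform targeting discussed above, while `debug=False` keeps it defeatable for production.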
@MMelQin Sounds great! It would be awesome to see the visualization :)
@MMelQin I see that SaveImageD is part of the composition, if I understand correctly. This SaveImageD saves the transformed image at each step, which is what I would do also. The problem is that once testing is done, the user needs to go back and delete all those lines, which can make the debugging workflow a bit tedious.
Agree @vikashg. Using SaveImageD is just a stop-gap until we have a proper implementation that cleanly integrates with visualization, perf analysis, image QA, etc.
Related Issue/PR in MONAI Core: There is another item named |
This one can be done with |
Is your feature request related to a problem? Please describe.
This feature request is about debugging the input data loader in a meaningful and tractable manner.
Describe the solution you'd like
MONAI uses a suite of data loaders and applies the transformations before feeding the data to the neural network, so it is important to check that the data preprocessing is correct. As of now, what we do is write a few lines of code between our transformations and save the image as a NIfTI or .png file to inspect it. However, this is not an efficient practice.
A good solution would be a debug flag which can be flipped to plot the input data along with all the transformations in a single image. Something like this.
In such a manner the whole input pipeline can be visually debugged in one step.
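One way such a debug flag could work is to collect each intermediate image and tile them into a single montage array. This is a hedged sketch, not an existing MONAI API: the `debug_montage` helper, the zero-padding to a common shape, and the lambda transforms are all illustrative assumptions.

```python
import numpy as np

def debug_montage(image, transforms):
    """Apply each transform in sequence, collecting every stage
    (including the raw input) and tiling them side by side."""
    stages = [np.asarray(image, dtype=float)]
    for t in transforms:
        stages.append(np.asarray(t(stages[-1]), dtype=float))
    # Pad all stages to a common shape so they can be concatenated.
    h = max(s.shape[0] for s in stages)
    w = max(s.shape[1] for s in stages)
    padded = [np.pad(s, ((0, h - s.shape[0]), (0, w - s.shape[1]))) for s in stages]
    return np.concatenate(padded, axis=1)  # one row: input, step 1, step 2, ...

transforms = [
    lambda x: x / x.max(),   # e.g. intensity normalization
    lambda x: x[::2, ::2],   # e.g. downsampling
]
montage = debug_montage(np.arange(16.0).reshape(4, 4), transforms)
print(montage.shape)  # (4, 12): three 4-wide panels side by side
```

The resulting array could be rendered as one figure (or one saved image per slice for 3D volumes), showing the whole pipeline at a glance when the flag is on.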
Describe alternatives you've considered
There are some alternatives, for example TensorBoard, but TensorBoard doesn't provide a timeline of image processing. This might not be a big problem for regular computer vision images, but it is really important in medical imaging.
Additional context
Maintaining a visual history of the image processing (transformations) applied before inference would be useful for radiologists while interpreting the results. It would also have visual appeal for developers when debugging. I can work on it.