Hi, thank you for sharing this great work!
I noticed that in the tutorial "Pre-trained SwinUNETR Backbone on ~50,000 3D Volumes", it is mentioned that pre-training was done on ~50,000 3D volumes drawn from 14 datasets.
However, when I checked the referenced paper "Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training", it seems that only ~10,000 volumes were used for pre-training.
I have a few questions about this:
1. Are there any papers or technical reports that provide more details about the results obtained with the ~50,000 3D volumes?
2. Did the inclusion of the additional data lead to any performance improvements?
3. Which specific weight file corresponds to the pre-training with ~50,000 3D volumes? (For context, a rough sketch of how I am loading the weights is included after this list.)
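For reference, this is roughly how I am currently loading a self-supervised checkpoint into the MONAI SwinUNETR backbone. The checkpoint file name and the architecture hyper-parameters below are placeholders on my side, not the names of the actual released artifact:

```python
import torch
from monai.networks.nets import SwinUNETR

# Placeholder hyper-parameters -- not taken from the tutorial or the paper.
model = SwinUNETR(
    img_size=(96, 96, 96),  # deprecated in newer MONAI releases; kept for older versions
    in_channels=1,
    out_channels=14,
    feature_size=48,
)

# "ssl_pretrained.pt" is a placeholder name for the pre-trained weight file in question.
ckpt = torch.load("ssl_pretrained.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"

# strict=False so the segmentation head can stay randomly initialized.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```

Knowing which released file matches the ~50,000-volume pre-training would let me confirm I am loading the intended backbone.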
I’d greatly appreciate any clarification or pointers to relevant resources.
Thank you for your time!