# 2914 release note, and what's new for v0.7 #2992

Merged (2 commits, Sep 23, 2021)
**CHANGELOG.md** (54 additions, 2 deletions)
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

## [Unreleased]
* Renamed the models' `n_classes` argument to `num_classes`

## [0.7.0] - 2021-09-24
### Added
* Overview of [new features in v0.7](docs/source/whatsnew_0_7.md)
* Initial phase of major usability improvements in `monai.transforms`, supporting both PyTorch and NumPy as input types and computational backends
* Performance enhancements, with [profiling and tuning guides](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md) for typical use cases
* Reproducing [training modules and workflows](https://github.com/Project-MONAI/tutorials/tree/master/kaggle/RANZCR/4th_place_solution) of state-of-the-art Kaggle competition solutions
* 24 new transforms, including
* `OneOf` meta transform
* DeepEdit guidance signal transforms for interactive segmentation
* Transforms for self-supervised pre-training
* Integration of [NVIDIA Tools Extension](https://developer.nvidia.com/blog/nvidia-tools-extension-api-nvtx-annotation-tool-for-profiling-code-in-python-and-c-c/) (NVTX)
* Integration of [cuCIM](https://github.com/rapidsai/cucim)
* Stain normalization and contextual grid for digital pathology
* `Transchex` network, a vision-language transformer for chest X-ray analysis
* `DatasetSummary` utility in `monai.data`
* `WarmupCosineSchedule`
* Deprecation warnings and documentation support for better backwards compatibility
* Padding transforms with additional `kwargs` and different backend APIs
* Additional options such as `dropout` and `norm` in various networks and their submodules
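
Among the new transforms, `OneOf` applies exactly one of a set of transforms, chosen at random. The snippet below is a minimal standalone sketch of the idea; the real class in `monai.transforms` also normalizes weights, supports dictionary-based data, and tracks the applied transform for inversion. The toy transforms here are purely illustrative.

```python
import random

class OneOf:
    """Apply exactly one of the given transforms, chosen at random.

    Standalone sketch of the idea behind MONAI's `OneOf` meta transform.
    """

    def __init__(self, transforms, weights=None):
        self.transforms = list(transforms)
        # default to a uniform choice when no weights are given
        self.weights = weights if weights is not None else [1.0] * len(self.transforms)

    def __call__(self, data):
        chosen = random.choices(self.transforms, weights=self.weights, k=1)[0]
        return chosen(data)

# illustrative toy transforms
flip = lambda seq: seq[::-1]
double = lambda seq: [v * 2 for v in seq]

aug = OneOf([flip, double], weights=[1.0, 0.0])  # weight 0 disables `double`
```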

### Changed
* Base Docker image upgraded to `nvcr.io/nvidia/pytorch:21.08-py3` from `nvcr.io/nvidia/pytorch:21.06-py3`
* Deprecated input argument `n_classes`, in favor of `num_classes`
* Deprecated input arguments `dimensions` and `ndims`, in favor of `spatial_dims`
* Updated the Sphinx-based documentation theme for better readability
* `NdarrayTensor` type is replaced by `NdarrayOrTensor` for simpler annotations
* Attention-based network blocks now support both 2D and 3D inputs
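
The renamed arguments above are handled with deprecation warnings rather than hard breaks. A minimal sketch of that pattern, assuming a decorator that maps an old keyword onto its new name (MONAI's own mechanism lives in `monai.utils`; `build_head` below is purely illustrative):

```python
import functools
import warnings

def deprecated_arg(old_name, new_name):
    """Accept a renamed keyword argument under its old name, with a warning."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old_name in kwargs:
                warnings.warn(
                    f"argument `{old_name}` is deprecated, use `{new_name}` instead",
                    DeprecationWarning,
                    stacklevel=2,
                )
                # forward the value under the new name
                kwargs[new_name] = kwargs.pop(old_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated_arg("n_classes", "num_classes")
def build_head(num_classes):
    return num_classes  # stand-in for real model construction
```

Old call sites keep working (with a `DeprecationWarning`) until the alias is removed in a later release.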

### Removed
* The deprecated `TransformInverter`, in favor of `monai.transforms.InvertD`
* GitHub self-hosted CI/CD pipelines for nightly and post-merge tests
* `monai.handlers.utils.evenly_divisible_all_gather`
* `monai.handlers.utils.string_list_all_gather`

### Fixed
* A multi-thread cache-writing issue in `LMDBDataset`
* Output shape convention inconsistencies of the image readers
* Output directory and file name flexibility issue for `NiftiSaver`, `PNGSaver`
* Requirement of the `label` field in test-time augmentation
* Input argument flexibility issues for `ThreadDataLoader`
* Decoupled `Dice` and `CrossEntropy` intermediate results in `DiceCELoss`
* Improved documentation, code examples, and warning messages in various modules
* Various usability issues reported by users

## [0.6.0] - 2021-07-08
### Added
* Fully compatible with PyTorch 1.9
* `--disttests` and `--min` options for `runtests.sh`
* Initial support of pre-merge tests with Nvidia Blossom system

### Changed
* Base Docker image upgraded to `nvcr.io/nvidia/pytorch:21.06-py3` from `nvcr.io/nvidia/pytorch:21.04-py3`
* Unified the terms: `post_transform` is renamed to `postprocessing`, `pre_transform` is renamed to `preprocessing`
* Unified the postprocessing transforms and event handlers to accept the "channel-first" data format
* `evenly_divisible_all_gather` and `string_list_all_gather` moved to `monai.utils.dist`

### Removed
* Support of 'batched' input for postprocessing transforms and event handlers
* `TorchVisionFullyConvModel`
* `set_visible_devices` utility function
* `SegmentationSaver` and `TransformsInverter` handlers

### Fixed
* Issue of handling big-endian image headers
* Multi-thread issue for non-random transforms in the cache-based datasets
…the postprocessing steps should be used before calling the metrics methods
* Optionally depend on PyTorch-Ignite v0.4.2 instead of v0.3.0
* Optionally depend on torchvision, ITK
* Enhanced CI tests with 8 new testing environments

### Removed
* `MONAI/examples` folder (relocated into [`Project-MONAI/tutorials`](https://github.com/Project-MONAI/tutorials))
* `MONAI/research` folder (relocated to [`Project-MONAI/research-contributions`](https://github.com/Project-MONAI/research-contributions))

### Fixed
* `dense_patch_slices` incorrect indexing
* Data type issue in `GeneralizedWassersteinDiceLoss`
* Cross-platform CI tests supporting multiple Python versions
* Optional import mechanism
* Experimental features for third-party transforms integration

### Changed
> For more details please visit [the project wiki](https://github.com/Project-MONAI/MONAI/wiki/Notable-changes-between-0.1.0-and-0.2.0)
* Core modules now require numpy >= 1.17
* Base Docker image upgraded to `nvcr.io/nvidia/pytorch:20.03-py3` from `nvcr.io/nvidia/pytorch:19.10-py3`
* Enhanced local testing tools
* Documentation website domain changed to https://docs.monai.io

### Removed
* Support of Python < 3.6
* Automatic installation of optional dependencies including pytorch-ignite, nibabel, tensorboard, pillow, scipy, scikit-image

### Fixed
* Various issues in type and argument names consistency
* Various issues in docstring and documentation site

[highlights]: https://github.com/Project-MONAI/MONAI/blob/master/docs/source/highlights.md

[Unreleased]: https://github.com/Project-MONAI/MONAI/compare/0.7.0...HEAD
[0.7.0]: https://github.com/Project-MONAI/MONAI/compare/0.6.0...0.7.0
[0.6.0]: https://github.com/Project-MONAI/MONAI/compare/0.5.3...0.6.0
[0.5.3]: https://github.com/Project-MONAI/MONAI/compare/0.5.0...0.5.3
[0.5.0]: https://github.com/Project-MONAI/MONAI/compare/0.4.0...0.5.0
**docs/images/nsight_comparison.png** (binary file added)
**docs/source/whatsnew.rst** (1 addition)

What's New

.. toctree::
   :maxdepth: 1

   whatsnew_0_7.md
   whatsnew_0_6.md
   whatsnew_0_5.md
**docs/source/whatsnew_0_6.md** (1 addition, 1 deletion)

# What's new in 0.6

- Decollating mini-batches as an essential post-processing step
- Pythonic APIs to load the pretrained models from Clara Train MMARs
**docs/source/whatsnew_0_7.md** (62 additions)
# What's new in 0.7 🎉🎉

- Performance enhancements with profiling and tuning guides
- Major usability improvements in `monai.transforms`
- Reimplementing state-of-the-art Kaggle solutions
- Vision-language multimodal transformer architectures

## Performance enhancements with profiling and tuning guides

Model training is often a time-consuming step during deep learning development,
especially for medical imaging applications. Even with powerful hardware (e.g.
CPUs/GPUs with large RAM), workflows often require careful profiling and tuning
to reach high performance. MONAI has been focusing on performance enhancements,
and this version provides [a fast model training
guide](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_model_training_guide.md)
to help build highly performant workflows, with a comprehensive overview of the
profiling tools and practical optimization strategies. The following figure
shows [NVIDIA Nsight™ Systems](https://developer.nvidia.com/nsight-systems) used for system-wide performance analysis during
a performance-enhancement study.
![nsight_vis](../images/nsight_comparison.png)

Guided by this profiling, several typical use cases were studied and optimized
to improve training efficiency. The following figure shows that fast training
with MONAI can be 20 times faster than a regular baseline ([learn
more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb)).
![fast_training](../images/fast_training.png)
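
One of the key strategies in the fast-training guide is caching the output of deterministic preprocessing so it runs only once. Below is a dependency-free sketch of that idea, loosely modeled on MONAI's `CacheDataset`; class and argument names here are illustrative.

```python
class CachedDataset:
    """Run deterministic preprocessing once; apply random augmentation per access.

    Sketch of the caching idea behind MONAI's `CacheDataset`.
    """

    def __init__(self, items, deterministic_transform, random_transform):
        self.random_transform = random_transform
        # pay the deterministic cost (loading, resampling, ...) once, up front
        self._cache = [deterministic_transform(item) for item in items]

    def __len__(self):
        return len(self._cache)

    def __getitem__(self, index):
        # only the cheap random augmentation runs every epoch
        return self.random_transform(self._cache[index])
```

Across many epochs, the expensive deterministic chain is amortized to a single pass over the dataset.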

## Major usability improvements in `monai.transforms` for NumPy/PyTorch inputs and backends

MONAI has started to roll out major usability enhancements for the
`monai.transforms` module. Many transforms now support both NumPy and
PyTorch as input types and computational backends.

One benefit of these enhancements is that users can now better leverage
GPUs for preprocessing. By transferring the input data to the GPU with
`ToTensor` or `EnsureType` and then applying GPU-based transforms, [the
spleen segmentation
tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb)
demonstrates the potential of these flexible modules for fast and
efficient training.
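
As a rough illustration of the backend-agnostic style, the sketch below rescales intensities using only `min`/`max` and elementwise arithmetic, so the same code path serves NumPy arrays and PyTorch tensors (CPU or GPU); a plain-list fallback keeps the sketch dependency-free. The function name mirrors MONAI's `ScaleIntensity` transform but is otherwise illustrative.

```python
def scale_intensity(img, minv=0.0, maxv=1.0):
    """Linearly rescale intensities to [minv, maxv], keeping the input's backend.

    The array branch relies only on `min`/`max` and elementwise arithmetic,
    so it works unchanged for NumPy arrays and PyTorch tensors.
    """
    if isinstance(img, list):  # fallback so the sketch runs without numpy/torch
        lo, hi = min(img), max(img)
        return [(v - lo) / (hi - lo) * (maxv - minv) + minv for v in img]
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (maxv - minv) + minv
```

Because no backend-specific calls appear, a tensor already resident on the GPU stays there for the whole transform.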

## Reimplementing state-of-the-art Kaggle solutions

With this release, we actively evaluate and enhance the quality and
flexibility of the MONAI core modules, using public Kaggle challenges as a
testbed. [A
reimplementation](https://github.com/Project-MONAI/tutorials/tree/master/kaggle/RANZCR/4th_place_solution)
of a state-of-the-art solution to the [Kaggle RANZCR CLiP - Catheter and
Line Position
Challenge](https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification)
is made available in this version.

## Vision-language multimodal transformers

In this release, MONAI adds support for training multimodal (vision +
language) transformers that can handle both image and textual data. MONAI
introduces the `Transchex` model, which consists of vision, language, and
mixed-modality transformer layers that process chest X-rays and their
corresponding radiological reports within a unified framework. Users can
alter the architecture by varying the number of vision, language, and
mixed-modality layers and by customizing the classification head. The
model can also be initialized from pre-trained BERT language models for
fine-tuning.