
Commit

Merge pull request #299 from microsoft/master

merge master
SparkSnail authored Jun 23, 2021
2 parents 805e773 + 27e123d commit f9dbdb4
Showing 313 changed files with 15,091 additions and 13,703 deletions.
15 changes: 7 additions & 8 deletions Dockerfile
@@ -1,7 +1,7 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04
FROM nvidia/cuda:10.2-cudnn8-runtime-ubuntu18.04

ARG NNI_RELEASE

@@ -43,24 +43,23 @@ RUN ln -s python3 /usr/bin/python
#
RUN python3 -m pip install --upgrade pip==20.2.4 setuptools==50.3.2

# numpy 1.14.3 scipy 1.1.0
RUN python3 -m pip --no-cache-dir install numpy==1.14.3 scipy==1.1.0
# numpy 1.19.5 scipy 1.5.4
RUN python3 -m pip --no-cache-dir install numpy==1.19.5 scipy==1.5.4

#
# TensorFlow
#
RUN python3 -m pip --no-cache-dir install tensorflow==2.3.1

#
# Keras 2.1.6
# Keras
#
RUN python3 -m pip --no-cache-dir install Keras==2.1.6
RUN python3 -m pip --no-cache-dir install Keras==2.4.3

#
# PyTorch
#
RUN python3 -m pip --no-cache-dir install torch==1.6.0
RUN python3 -m pip install torchvision==0.7.0
RUN python3 -m pip --no-cache-dir install torch==1.7.1 torchvision==0.8.2 pytorch-lightning==1.3.3

#
# sklearn 0.24.1
@@ -70,7 +69,7 @@ RUN python3 -m pip --no-cache-dir install scikit-learn==0.24.1
#
# pandas==0.23.4 lightgbm==2.2.2
#
RUN python3 -m pip --no-cache-dir install pandas==0.23.4 lightgbm==2.2.2
RUN python3 -m pip --no-cache-dir install pandas==1.1 lightgbm==2.2.2

#
# Install NNI
45 changes: 21 additions & 24 deletions README.md
@@ -26,10 +26,11 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* ML Platform owners who want to **support AutoML in their platform**.

## **What's NEW!** &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>
* **New release**: [v2.2 is available](https://github.com/microsoft/nni/releases) - _released on April-26-2021_
* **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili entry](https://space.bilibili.com/1649051673) - _last updated on Apr-21-2021_

* **New use case sharing**: [Cost-effective Hyper-parameter Tuning using AdaptDL with NNI](https://medium.com/casl-project/cost-effective-hyper-parameter-tuning-using-adaptdl-with-nni-e55642888761) - _posted on Feb-23-2021_
* **New release**: [v2.3 is available](https://github.com/microsoft/nni/releases) - _released on June-15-2021_
* **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili entry](https://space.bilibili.com/1649051673) - _last updated on May-26-2021_
* **New webinar**: [Introducing Retiarii: A deep learning exploratory-training framework on NNI](https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-Live.html) - _scheduled on June-24-2021_
* **New community channel**: [Discussions](https://github.com/microsoft/nni/discussions)

## **NNI capabilities in a glance**

@@ -122,25 +123,19 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#SMAC">SMAC</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#MetisTuner">Metis Tuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#GPTuner">GP Tuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#DNGOTuner">DNGO Tuner</a></li>
</ul>
<b>RL Based</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#PPOTuner">PPO Tuner</a> </li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search</a>
<a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search (Retiarii)</a>
<ul>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ENAS.html">ENAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/DARTS.html">DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/PDARTS.html">P-DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/CDARTS.html">CDARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/SPOS.html">SPOS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#NetworkMorphism">Network Morphism</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/TextNAS.html">TextNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Cream.html">Cream</a></li>
</ul>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ENAS.html">ENAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/DARTS.html">DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/SPOS.html">SPOS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/FBNet.html">FBNet</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ExplorationStrategies.html">Reinforcement Learning</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ExplorationStrategies.html">Regularized Evolution</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">More...</a></li>
</ul>
<a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">Model Compression</a>
<ul>
@@ -153,11 +148,13 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#simulatedannealing-pruner">SimulatedAnnealing Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#admm-pruner">ADMM Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#autocompress-pruner">AutoCompress Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">More...</a></li>
</ul>
<b>Quantization</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#qat-quantizer">QAT Quantizer</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#dorefa-quantizer">DoReFa Quantizer</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#bnn-quantizer">BNN Quantizer</a></li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/Overview.html">Feature Engineering (Beta)</a>
@@ -207,6 +204,8 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/CustomizeTuner.html">CustomizeTuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Assessor/CustomizeAssessor.html">CustomizeAssessor</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tutorial/InstallCustomizedAlgos.html">Install Customized Algorithms as Builtin Tuners/Assessors/Advisors</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/QuickStart.html#define-your-model-space">Define NAS Model Space</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ApiReference.html">NAS/Retiarii APIs</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
@@ -252,7 +251,7 @@ Note:
* Download the examples by cloning the source code.

```bash
git clone -b v2.2 https://github.com/Microsoft/nni.git
git clone -b v2.3 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
@@ -299,10 +298,7 @@ You can use these commands to get more information about the experiment

* Open the `Web UI url` in your browser to view detailed information about the experiment and all the submitted trial jobs, as shown below. [Here](https://nni.readthedocs.io/en/stable/Tutorial/WebUI.html) are more Web UI pages.

<table style="border: none">
<th><img src="./docs/img/webui-img/full-oview.png" alt="drawing" width="395" height="300"/></th>
<th><img src="./docs/img/webui-img/full-detail.png" alt="drawing" width="410" height="300"/></th>
</table>
<img src="docs/static/img/webui.gif" alt="webui" width="100%"/>

## **Releases and Contributing**
NNI has a monthly release cycle (major releases). Please let us know if you encounter a bug by [filing an issue](https://github.com/microsoft/nni/issues/new/choose).
@@ -320,6 +316,7 @@ We appreciate all contributions and thank all the contributors!

## **Feedback**
* [File an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
* Open or participate in a [discussion](https://github.com/microsoft/nni/discussions).
* Discuss on the NNI [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge).

Join IM discussion groups:
4 changes: 3 additions & 1 deletion dependencies/required.txt
@@ -3,7 +3,7 @@ hyperopt == 0.1.2
json_tricks
netifaces
psutil
ruamel.yaml
pyyaml
requests
responses
schema
@@ -19,3 +19,5 @@ numpy < 1.20 ; sys_platform != "win32" and python_version < "3.7"
numpy ; sys.platform != "win32" and python_version >= "3.7"
scipy < 1.6 ; python_version < "3.7"
scipy ; python_version >= "3.7"
pandas < 1.2 ; python_version < "3.7"
matplotlib < 3.4 ; python_version < "3.7"
3 changes: 3 additions & 0 deletions dependencies/required_extra.txt
@@ -11,3 +11,6 @@ statsmodels==0.12.0

# PPOTuner
gym

# DNGO
pybnn
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/AutoCompletion.rst
@@ -25,7 +25,7 @@ Step 1. Download ``bash-completion``
cd ~
wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
Here, {nni-version} should be replaced by the version of NNI, e.g., ``master``, ``v2.2``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
Here, {nni-version} should be replaced by the version of NNI, e.g., ``master``, ``v2.3``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.

.. cannot find :githublink:`here <tools/bash-completion>`.
119 changes: 119 additions & 0 deletions docs/en_US/Compression/AutoCompression.rst
@@ -0,0 +1,119 @@
Auto Compression with NNI Experiment
====================================

If you want to compress your model but don't know which compression algorithm to choose, what sparsity suits your model, or simply want to try more possibilities, auto compression may help you.
Users can choose different compression algorithms and define the algorithms' search space; auto compression will then launch an NNI experiment and automatically try the algorithms with varying sparsity.
Of course, in addition to the sparsity rate, users can also introduce other related parameters into the search space.
If you don't know what a search space is or how to write one, see `this <./Tutorial/SearchSpaceSpec.rst>`__ for reference.
Using auto compression is similar to launching an NNI experiment from Python.
The main differences are as follows:

* Use a generator to help generate the search space object.
* Provide the model to be compressed; it should already be pre-trained.
* There is no need to set ``trial_command``; instead, ``auto_compress_module`` must be set as the ``AutoCompressionExperiment`` input.

Generate search space
---------------------

Due to the extensive use of nested search spaces, we recommend using a generator to configure the search space.
The following is an example: use ``add_config()`` to add sub-configs, then ``dumps()`` to dump the search space dict.

.. code-block:: python

    from nni.algorithms.compression.pytorch.auto_compress import AutoCompressionSearchSpaceGenerator

    generator = AutoCompressionSearchSpaceGenerator()
    generator.add_config('level', [
        {
            "sparsity": {
                "_type": "uniform",
                "_value": [0.01, 0.99]
            },
            'op_types': ['default']
        }
    ])
    generator.add_config('qat', [
        {
            'quant_types': ['weight', 'output'],
            'quant_bits': {
                'weight': 8,
                'output': 8
            },
            'op_types': ['Conv2d', 'Linear']
        }])

    search_space = generator.dumps()

Now we support the following pruners and quantizers:

.. code-block:: python

    PRUNER_DICT = {
        'level': LevelPruner,
        'slim': SlimPruner,
        'l1': L1FilterPruner,
        'l2': L2FilterPruner,
        'fpgm': FPGMPruner,
        'taylorfo': TaylorFOWeightFilterPruner,
        'apoz': ActivationAPoZRankFilterPruner,
        'mean_activation': ActivationMeanRankFilterPruner
    }

    QUANTIZER_DICT = {
        'naive': NaiveQuantizer,
        'qat': QAT_Quantizer,
        'dorefa': DoReFaQuantizer,
        'bnn': BNNQuantizer
    }

Provide user model for compression
----------------------------------

Users need to inherit ``AbstractAutoCompressionModule`` and override the abstract class function.

.. code-block:: python

    from typing import Callable

    import torch.nn as nn

    from nni.algorithms.compression.pytorch.auto_compress import AbstractAutoCompressionModule

    class AutoCompressionModule(AbstractAutoCompressionModule):
        @classmethod
        def model(cls) -> nn.Module:
            ...
            return _model

        @classmethod
        def evaluator(cls) -> Callable[[nn.Module], float]:
            ...
            return _evaluator

Users need to implement at least ``model()`` and ``evaluator()``.
If you use an iterative pruner, you additionally need to implement ``optimizer_factory()``, ``criterion()`` and ``sparsifying_trainer()``.
If you want to finetune the model after compression, you need to implement ``optimizer_factory()``, ``criterion()``, ``post_compress_finetuning_trainer()`` and ``post_compress_finetuning_epochs()``.
``optimizer_factory()`` should return a factory function whose input is an iterable of parameters (i.e. your ``model.parameters()``) and whose output is an optimizer instance.
The two kinds of ``trainer()`` should return a trainer function that takes ``model, optimizer, criterion, current_epoch`` as input; a sketch of these additional methods follows the reference links below.
The full abstract interface refers to :githublink:`interface.py <nni/algorithms/compression/pytorch/auto_compress/interface.py>`.
An example of ``AutoCompressionModule`` implementation refers to :githublink:`auto_compress_module.py <examples/model_compress/auto_compress/torch/auto_compress_module.py>`.
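
As a rough illustration, the additional methods for an iterative pruner might look like the following. This is a minimal sketch, not NNI's reference implementation; the training loop and the ``_train_loader`` it iterates over are hypothetical placeholders for user code.

.. code-block:: python

    import torch
    import torch.nn.functional as F

    class AutoCompressionModule(AbstractAutoCompressionModule):
        ...  # model() and evaluator() as above

        @classmethod
        def optimizer_factory(cls):
            # Factory function: takes an iterable of parameters
            # (e.g. model.parameters()) and returns an optimizer instance.
            return lambda params: torch.optim.SGD(params, lr=0.01, momentum=0.9)

        @classmethod
        def criterion(cls):
            # Loss function used by the trainer.
            return F.cross_entropy

        @classmethod
        def sparsifying_trainer(cls):
            # Returns a trainer with the expected input signature:
            # (model, optimizer, criterion, current_epoch).
            def _train_one_epoch(model, optimizer, criterion, current_epoch):
                model.train()
                for data, target in _train_loader:  # hypothetical user-provided DataLoader
                    optimizer.zero_grad()
                    loss = criterion(model(data), target)
                    loss.backward()
                    optimizer.step()
            return _train_one_epoch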

Launch NNI experiment
---------------------

This is similar to launching from Python; the differences are that ``trial_command`` does not need to be set, and the user-provided ``AutoCompressionModule`` is passed as the ``AutoCompressionExperiment`` input.

.. code-block:: python

    from pathlib import Path

    from nni.algorithms.compression.pytorch.auto_compress import AutoCompressionExperiment
    from auto_compress_module import AutoCompressionModule

    experiment = AutoCompressionExperiment(AutoCompressionModule, 'local')
    experiment.config.experiment_name = 'auto compression torch example'
    experiment.config.trial_concurrency = 1
    experiment.config.max_trial_number = 10
    experiment.config.search_space = search_space
    experiment.config.trial_code_directory = Path(__file__).parent
    experiment.config.tuner.name = 'TPE'
    experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
    experiment.config.training_service.use_active_gpu = True

    experiment.run(8088)
75 changes: 0 additions & 75 deletions docs/en_US/Compression/AutoPruningUsingTuners.rst

This file was deleted.

6 changes: 3 additions & 3 deletions docs/en_US/Compression/DependencyAware.rst
@@ -54,11 +54,11 @@ To enable the dependency-aware mode for ``L1FilterPruner``\ :
# for FPGMPruner
# pruner = FPGMPruner(model, config_list, dependency_aware=True, dummy_input=dummy_input)
# for ActivationAPoZRankFilterPruner
# pruner = ActivationAPoZRankFilterPruner(model, config_list, statistics_batch_num=1, , dependency_aware=True, dummy_input=dummy_input)
# pruner = ActivationAPoZRankFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)
# for ActivationMeanRankFilterPruner
# pruner = ActivationMeanRankFilterPruner(model, config_list, statistics_batch_num=1, dependency_aware=True, dummy_input=dummy_input)
# pruner = ActivationMeanRankFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)
# for TaylorFOWeightFilterPruner
# pruner = TaylorFOWeightFilterPruner(model, config_list, statistics_batch_num=1, dependency_aware=True, dummy_input=dummy_input)
# pruner = TaylorFOWeightFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)
pruner.compress()
