merge master #299

Merged: 61 commits, Jun 23, 2021

Commits
727ee1c  Fix IT on model compression (#3673) (ultmaster, May 26, 2021)
964d9a9  fix url and api conflict (#3672) (acured, May 27, 2021)
c05a922  update prefix rest api format and fix getPrefix function bug (#3674) (Lijiaoa, May 27, 2021)
d1450b4  fix compression pipeline (#3678) (J-shang, May 27, 2021)
e349b44  Add TensorFlow slim pruner (#3614) (liuzhe-lz, May 27, 2021)
277e63f  Support 3rd-party training service (#3662) (liuzhe-lz, May 27, 2021)
684005d  fix no log from subprocess on trial (#3653) (acured, May 27, 2021)
916267d  Add trial stdout button on local mode (#3690) (Lijiaoa, May 27, 2021)
9b0bc37  [#3507 follow up] update doc (#3688) (J-shang, May 27, 2021)
580c597  DNGO tuner (#3479) (98may, May 28, 2021)
a8879dd  [Model Compression] auto compression (#3631) (J-shang, May 28, 2021)
5e33520  Documentation typo fix in DoReFa compression (#3693) (Erfandarzi, May 28, 2021)
a7a3ef9  HPO Benchmark doc and setup script fix (#3689) (xiaowu0162, May 28, 2021)
7075a83  Bump dns-packet from 1.3.1 to 1.3.4 in /ts/webui (#3694) (dependabot[bot], May 28, 2021)
b955ac9  Use PyYAML instead of ruamel.yaml (#3702) (ultmaster, Jun 1, 2021)
969fac0  Fix pandas version in ubuntu-legacy pipeline (#3704) (ultmaster, Jun 1, 2021)
f579f17  Fix DNGO tuner class name (#3707) (ultmaster, Jun 2, 2021)
42337dc  Speed up model compression pipeline (#3695) (J-shang, Jun 2, 2021)
259aee7  Update tf pruner example (#3708) (J-shang, Jun 2, 2021)
b7c91e7  Fix bugs and lints in nnictl (#3712) (liuzhe-lz, Jun 2, 2021)
521f191  Fix a logging related bug (#3705) (liuzhe-lz, Jun 3, 2021)
e269db6  Add benchmark support for DNGOTuner (#3720) (xiaowu0162, Jun 3, 2021)
ed40180  Fix typo in nni/experiment/launcher.py (#3701) (kvartet, Jun 3, 2021)
e67fea2  webui fix search bug (#3715) (Lijiaoa, Jun 3, 2021)
7eaf9c2  Add start_epoch configuration in PFLD example (#3709) (ultmaster, Jun 3, 2021)
a284f71  Fix a few bugs in Retiarii and upgrade Dockerfile (#3713) (ultmaster, Jun 3, 2021)
9f467f3  Bump ws from 7.3.0 to 7.4.6 in /ts/nni_manager (#3699) (dependabot[bot], Jun 3, 2021)
0ab7e37  Fix docker image version (#3722) (ultmaster, Jun 4, 2021)
b4da4a7  fix typo in compression (#3727) (J-shang, Jun 4, 2021)
e82731f  Use in-cluster config instead of kubeconfig when running NNI from wit… (rmfan, Jun 4, 2021)
d9dd29f  [webui] update search doc (#3723) (Lijiaoa, Jun 4, 2021)
6b52fb1  Fix 3rd-party training service bug (#3726) (liuzhe-lz, Jun 7, 2021)
d1c8d84  Fix a few issues in Retiarii (#3725) (ultmaster, Jun 7, 2021)
b11a4c3  Fix frameworkController FrameworkControllerTrialConfigTemplate constr… (SparkSnail, Jun 8, 2021)
700026c  fix optimize_mode issue (#3731) (Lijiaoa, Jun 8, 2021)
c4d449c  Catch clean up error to prevent crash (#3729) (liuzhe-lz, Jun 8, 2021)
eb65bc3  Port trial examples' config file to v2 (#3721) (liuzhe-lz, Jun 8, 2021)
d1b1e7b  Update config v2 doc (#3711) (kvartet, Jun 8, 2021)
159f9b3  update sharedstorage config to v2 (#3733) (J-shang, Jun 9, 2021)
4c1183c  Fix hidden nodes' remove in Retiarii (#3736) (ultmaster, Jun 9, 2021)
d37216e  Update jupyter notebook example (#3700) (kvartet, Jun 9, 2021)
642d30f  Bump merge-deep from 3.0.2 to 3.0.3 in /ts/webui (#3737) (dependabot[bot], Jun 9, 2021)
70f0b6a  update README webui img to gif (#3735) (Lijiaoa, Jun 9, 2021)
470a719  Bump ws from 5.2.2 to 5.2.3 in /ts/webui (#3741) (dependabot[bot], Jun 9, 2021)
54a89dd  Update QuickStart.rst (#3732) (AmitShtober, Jun 10, 2021)
40826ea  Fix a critical bug in example (#3810) (liuzhe-lz, Jun 11, 2021)
9e70639  fix foreground log (#3808) (J-shang, Jun 11, 2021)
95f4c86  Adapt system auto tuning examples to NNI V2 (#3784) (XiaotianGao, Jun 11, 2021)
4dc8c03  Make brute-force strategies budget aware (#3805) (ultmaster, Jun 11, 2021)
0247be5  Remove deepcopy in Retiarii evaluator (#3812) (ultmaster, Jun 11, 2021)
4146c71  [Retiarii] refactor of NAS doc and make python engine default (#3785) (QuanluZhang, Jun 13, 2021)
cf95cfc  Add release note and update versions to v2.3 (#3738) (kvartet, Jun 15, 2021)
8450724  update readme (#3827) (QuanluZhang, Jun 15, 2021)
71fc4da  Merge pull request #3836 from microsoft/v2.3 (ultmaster, Jun 16, 2021)
ca980ee  Add model wrapper to other Retiarii examples (#3838) (ultmaster, Jun 18, 2021)
728f549  Update experiment_config.yml (#3825) (krvaibhaw, Jun 18, 2021)
91ef554  change file name validation to warning (#3843) (liuzhe-lz, Jun 21, 2021)
b222543  flops counter formatting fix (#3837) (xiaowu0162, Jun 21, 2021)
ef15fc8  Bump node.js version to v16 (#3828) (liuzhe-lz, Jun 21, 2021)
009722a  Revert notebook example to base execution engine (#3852) (ultmaster, Jun 21, 2021)
27e123d  fix data device type bug (#3856) (linbinskn, Jun 22, 2021)

15 changes: 7 additions & 8 deletions Dockerfile
@@ -1,7 +1,7 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

FROM nvidia/cuda:9.2-cudnn7-runtime-ubuntu18.04
FROM nvidia/cuda:10.2-cudnn8-runtime-ubuntu18.04

ARG NNI_RELEASE

@@ -43,24 +43,23 @@ RUN ln -s python3 /usr/bin/python
#
RUN python3 -m pip install --upgrade pip==20.2.4 setuptools==50.3.2

# numpy 1.14.3 scipy 1.1.0
RUN python3 -m pip --no-cache-dir install numpy==1.14.3 scipy==1.1.0
# numpy 1.19.5 scipy 1.5.4
RUN python3 -m pip --no-cache-dir install numpy==1.19.5 scipy==1.5.4

#
# TensorFlow
#
RUN python3 -m pip --no-cache-dir install tensorflow==2.3.1

#
# Keras 2.1.6
# Keras
#
RUN python3 -m pip --no-cache-dir install Keras==2.1.6
RUN python3 -m pip --no-cache-dir install Keras==2.4.3

#
# PyTorch
#
RUN python3 -m pip --no-cache-dir install torch==1.6.0
RUN python3 -m pip install torchvision==0.7.0
RUN python3 -m pip --no-cache-dir install torch==1.7.1 torchvision==0.8.2 pytorch-lightning==1.3.3

#
# sklearn 0.24.1
@@ -70,7 +69,7 @@ RUN python3 -m pip --no-cache-dir install scikit-learn==0.24.1
#
# pandas==0.23.4 lightgbm==2.2.2
#
RUN python3 -m pip --no-cache-dir install pandas==0.23.4 lightgbm==2.2.2
RUN python3 -m pip --no-cache-dir install pandas==1.1 lightgbm==2.2.2

#
# Install NNI
45 changes: 21 additions & 24 deletions README.md
@@ -26,10 +26,11 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* ML Platform owners who want to **support AutoML in their platform**.

## **What's NEW!** &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>
* **New release**: [v2.2 is available](https://github.com/microsoft/nni/releases) - _released on April-26-2021_
* **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili 入口](https://space.bilibili.com/1649051673) - _last updated on Apr-21-2021_

* **New use case sharing**: [Cost-effective Hyper-parameter Tuning using AdaptDL with NNI](https://medium.com/casl-project/cost-effective-hyper-parameter-tuning-using-adaptdl-with-nni-e55642888761) - _posted on Feb-23-2021_
* **New release**: [v2.3 is available](https://github.com/microsoft/nni/releases) - _released on June-15-2021_
* **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili 入口](https://space.bilibili.com/1649051673) - _last updated on May-26-2021_
* **New webinar**: [Introducing Retiarii: A deep learning exploratory-training framework on NNI](https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-Live.html) - _scheduled on June-24-2021_
* **New community channel**: [Discussions](https://github.com/microsoft/nni/discussions)

## **NNI capabilities in a glance**

@@ -122,25 +123,19 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#SMAC">SMAC</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#MetisTuner">Metis Tuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#GPTuner">GP Tuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#DNGOTuner">DNGO Tuner</a></li>
</ul>
<b>RL Based</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#PPOTuner">PPO Tuner</a> </li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search</a>
<a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">Neural Architecture Search (Retiarii)</a>
<ul>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ENAS.html">ENAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/DARTS.html">DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/PDARTS.html">P-DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/CDARTS.html">CDARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/SPOS.html">SPOS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/BuiltinTuner.html#NetworkMorphism">Network Morphism</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/TextNAS.html">TextNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Cream.html">Cream</a></li>
</ul>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ENAS.html">ENAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/DARTS.html">DARTS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/SPOS.html">SPOS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/FBNet.html">FBNet</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ExplorationStrategies.html">Reinforcement Learning</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ExplorationStrategies.html">Regularized Evolution</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/Overview.html">More...</a></li>
</ul>
<a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">Model Compression</a>
<ul>
@@ -153,11 +148,13 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#simulatedannealing-pruner">SimulatedAnnealing Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#admm-pruner">ADMM Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Pruner.html#autocompress-pruner">AutoCompress Pruner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Overview.html">More...</a></li>
</ul>
<b>Quantization</b>
<ul>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#qat-quantizer">QAT Quantizer</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#dorefa-quantizer">DoReFa Quantizer</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Compression/Quantizer.html#bnn-quantizer">BNN Quantizer</a></li>
</ul>
</ul>
<a href="https://nni.readthedocs.io/en/stable/FeatureEngineering/Overview.html">Feature Engineering (Beta)</a>
@@ -207,6 +204,8 @@ Within the following table, we summarized the current NNI capabilities, we are g
<li><a href="https://nni.readthedocs.io/en/stable/Tuner/CustomizeTuner.html">CustomizeTuner</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Assessor/CustomizeAssessor.html">CustomizeAssessor</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/Tutorial/InstallCustomizedAlgos.html">Install Customized Algorithms as Builtin Tuners/Assessors/Advisors</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/QuickStart.html#define-your-model-space">Define NAS Model Space</a></li>
<li><a href="https://nni.readthedocs.io/en/stable/NAS/ApiReference.html">NAS/Retiarii APIs</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
@@ -252,7 +251,7 @@ Note:
* Download the examples by cloning the source code.

```bash
git clone -b v2.2 https://github.com/Microsoft/nni.git
git clone -b v2.3 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
@@ -299,10 +298,7 @@ You can use these commands to get more information about the experiment

* Open the `Web UI url` in your browser to view detailed information about the experiment and all the submitted trial jobs, as shown below. [Here](https://nni.readthedocs.io/en/stable/Tutorial/WebUI.html) are more Web UI pages.

<table style="border: none">
<th><img src="./docs/img/webui-img/full-oview.png" alt="drawing" width="395" height="300"/></th>
<th><img src="./docs/img/webui-img/full-detail.png" alt="drawing" width="410" height="300"/></th>
</table>
<img src="docs/static/img/webui.gif" alt="webui" width="100%"/>

## **Releases and Contributing**
NNI has a monthly release cycle (major releases). Please let us know if you encounter a bug by [filing an issue](https://github.com/microsoft/nni/issues/new/choose).
@@ -320,6 +316,7 @@ We appreciate all contributions and thank all the contributors!

## **Feedback**
* [File an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
* Open or participate in a [discussion](https://github.com/microsoft/nni/discussions).
* Discuss on the NNI [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) in NNI.

Join IM discussion groups:
4 changes: 3 additions & 1 deletion dependencies/required.txt
@@ -3,7 +3,7 @@ hyperopt == 0.1.2
json_tricks
netifaces
psutil
ruamel.yaml
pyyaml
requests
responses
schema
@@ -19,3 +19,5 @@ numpy < 1.20 ; sys_platform != "win32" and python_version < "3.7"
numpy ; sys.platform != "win32" and python_version >= "3.7"
scipy < 1.6 ; python_version < "3.7"
scipy ; python_version >= "3.7"
pandas < 1.2 ; python_version < "3.7"
matplotlib < 3.4 ; python_version < "3.7"
3 changes: 3 additions & 0 deletions dependencies/required_extra.txt
@@ -11,3 +11,6 @@ statsmodels==0.12.0

# PPOTuner
gym

# DNGO
pybnn
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/AutoCompletion.rst
@@ -25,7 +25,7 @@ Step 1. Download ``bash-completion``
cd ~
wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion

Here, {nni-version} should be replaced by the version of NNI, e.g., ``master``, ``v2.2``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
Here, {nni-version} should be replaced by the version of NNI, e.g., ``master``, ``v2.3``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.

.. cannot find :githublink:`here <tools/bash-completion>`.

119 changes: 119 additions & 0 deletions docs/en_US/Compression/AutoCompression.rst
@@ -0,0 +1,119 @@
Auto Compression with NNI Experiment
====================================

If you want to compress your model but don't know which compression algorithm to choose, what sparsity is suitable for your model, or simply want to try more possibilities, auto compression may help you.
Users can choose several compression algorithms and define their search space; auto compression then launches an NNI experiment and automatically tries the algorithms with varying sparsity.
Of course, in addition to the sparsity ratio, users can also introduce other related parameters into the search space.
If you don't know what a search space is or how to write one, `this <./Tutorial/SearchSpaceSpec.rst>`__ is for your reference.
Using auto compression is similar to launching an NNI experiment from Python.
The main differences are as follows:

* Use a generator to help generate the search space object.
* Provide the model to be compressed; the model should already be pre-trained.
* There is no need to set ``trial_command``; instead, an ``auto_compress_module`` must be provided as the ``AutoCompressionExperiment`` input.

Generate search space
---------------------

Because the search space is heavily nested, we recommend using a generator to configure it.
The following is an example: use ``add_config()`` to add a sub-config, then ``dumps()`` to dump the search space dict.

.. code-block:: python

from nni.algorithms.compression.pytorch.auto_compress import AutoCompressionSearchSpaceGenerator

generator = AutoCompressionSearchSpaceGenerator()
generator.add_config('level', [
{
"sparsity": {
"_type": "uniform",
"_value": [0.01, 0.99]
},
'op_types': ['default']
}
])
generator.add_config('qat', [
{
'quant_types': ['weight', 'output'],
'quant_bits': {
'weight': 8,
'output': 8
},
'op_types': ['Conv2d', 'Linear']
}])

search_space = generator.dumps()

Currently, we support the following pruners and quantizers (keyed by the algorithm names used in ``add_config()``):

.. code-block:: python

PRUNER_DICT = {
'level': LevelPruner,
'slim': SlimPruner,
'l1': L1FilterPruner,
'l2': L2FilterPruner,
'fpgm': FPGMPruner,
'taylorfo': TaylorFOWeightFilterPruner,
'apoz': ActivationAPoZRankFilterPruner,
'mean_activation': ActivationMeanRankFilterPruner
}

QUANTIZER_DICT = {
'naive': NaiveQuantizer,
'qat': QAT_Quantizer,
'dorefa': DoReFaQuantizer,
'bnn': BNNQuantizer
}

Provide user model for compression
----------------------------------

Users need to inherit ``AbstractAutoCompressionModule`` and override its abstract class functions.

.. code-block:: python

from nni.algorithms.compression.pytorch.auto_compress import AbstractAutoCompressionModule

class AutoCompressionModule(AbstractAutoCompressionModule):
@classmethod
def model(cls) -> nn.Module:
...
return _model

@classmethod
def evaluator(cls) -> Callable[[nn.Module], float]:
...
return _evaluator

Users need to implement at least ``model()`` and ``evaluator()``.
If you use an iterative pruner, you additionally need to implement ``optimizer_factory()``, ``criterion()`` and ``sparsifying_trainer()``.
If you want to fine-tune the model after compression, you need to implement ``optimizer_factory()``, ``criterion()``, ``post_compress_finetuning_trainer()`` and ``post_compress_finetuning_epochs()``.
``optimizer_factory()`` should return a factory function; its input is an iterable of parameters, i.e. your ``model.parameters()``, and its output is an optimizer instance.
The two kinds of trainer functions should each return a trainer that takes ``model, optimizer, criterion, current_epoch`` as input.
The full abstract interface is defined in :githublink:`interface.py <nni/algorithms/compression/pytorch/auto_compress/interface.py>`.
For an example implementation of ``AutoCompressionModule``, see :githublink:`auto_compress_module.py <examples/model_compress/auto_compress/torch/auto_compress_module.py>`.
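For illustration, here is a minimal, self-contained sketch of an ``AutoCompressionModule`` implementing these hooks. The tiny network, the random data and the one-step trainer are placeholders for your own pre-trained model, dataset and training loop, and the hook signatures follow the interface described above; consult :githublink:`interface.py <nni/algorithms/compression/pytorch/auto_compress/interface.py>` for the authoritative definitions.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    from nni.algorithms.compression.pytorch.auto_compress import AbstractAutoCompressionModule

    class Net(nn.Module):
        """Placeholder network; in real use, return your pre-trained model."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(32, 10)

        def forward(self, x):
            return self.fc(x)

    class AutoCompressionModule(AbstractAutoCompressionModule):
        @classmethod
        def model(cls) -> nn.Module:
            # Load your pre-trained weights here.
            return Net()

        @classmethod
        def evaluator(cls):
            # Return a callable that scores a compressed model, e.g. validation
            # accuracy. Random data is used here as a stand-in.
            def _evaluator(model: nn.Module) -> float:
                return float(model(torch.randn(4, 32)).mean())
            return _evaluator

        @classmethod
        def optimizer_factory(cls):
            # A factory function: its input is an iterable of parameters
            # (i.e. model.parameters()), its output is an optimizer instance.
            def _factory(params):
                return torch.optim.SGD(params, lr=0.01, momentum=0.9)
            return _factory

        @classmethod
        def criterion(cls):
            return F.cross_entropy

        @classmethod
        def sparsifying_trainer(cls):
            # The returned trainer takes (model, optimizer, criterion, current_epoch);
            # this stand-in performs a single step on random data.
            def _trainer(model, optimizer, criterion, epoch):
                model.train()
                data, target = torch.randn(4, 32), torch.randint(0, 10, (4,))
                optimizer.zero_grad()
                loss = criterion(model(data), target)
                loss.backward()
                optimizer.step()
            return _trainer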

Launch NNI experiment
---------------------

Launching is similar to launching from Python; the differences are that there is no need to set ``trial_command``, and the user-provided ``AutoCompressionModule`` is passed as the ``AutoCompressionExperiment`` input.

.. code-block:: python

from pathlib import Path
from nni.algorithms.compression.pytorch.auto_compress import AutoCompressionExperiment

from auto_compress_module import AutoCompressionModule

experiment = AutoCompressionExperiment(AutoCompressionModule, 'local')
experiment.config.experiment_name = 'auto compression torch example'
experiment.config.trial_concurrency = 1
experiment.config.max_trial_number = 10
experiment.config.search_space = search_space
experiment.config.trial_code_directory = Path(__file__).parent
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.training_service.use_active_gpu = True

experiment.run(8088)
75 changes: 0 additions & 75 deletions docs/en_US/Compression/AutoPruningUsingTuners.rst

This file was deleted.

6 changes: 3 additions & 3 deletions docs/en_US/Compression/DependencyAware.rst
@@ -54,11 +54,11 @@ To enable the dependency-aware mode for ``L1FilterPruner``\ :
# for FPGMPruner
# pruner = FPGMPruner(model, config_list, dependency_aware=True, dummy_input=dummy_input)
# for ActivationAPoZRankFilterPruner
# pruner = ActivationAPoZRankFilterPruner(model, config_list, statistics_batch_num=1, , dependency_aware=True, dummy_input=dummy_input)
# pruner = ActivationAPoZRankFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)
# for ActivationMeanRankFilterPruner
# pruner = ActivationMeanRankFilterPruner(model, config_list, statistics_batch_num=1, dependency_aware=True, dummy_input=dummy_input)
# pruner = ActivationMeanRankFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)
# for TaylorFOWeightFilterPruner
# pruner = TaylorFOWeightFilterPruner(model, config_list, statistics_batch_num=1, dependency_aware=True, dummy_input=dummy_input)
# pruner = TaylorFOWeightFilterPruner(model, config_list, optimizer, trainer, criterion, sparsifying_training_batches=1, dependency_aware=True, dummy_input=dummy_input)

pruner.compress()
