A question about accuracy #135
Sorry about that, please use the v1.0 pruner from https://github.com/VainF/Torch-Pruning/blob/v1.0/torch_pruning/pruner/algorithms/group_norm_pruner.py. I just reran the code and saw the same heavy performance drop during the sparse-training stage (val loss noticeably high). The cause is that a later update added the bias terms to sparse training; bias sparsification is actually problematic, and we uploaded it without thorough testing, sorry for the trouble. We will roll this part back as soon as possible.

v1.0 sparse training:
v1.1 sparse training (bias included):
v1.1 with the pruner replaced by the v1.0 one (appears to run normally):
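For reference, a minimal sketch of the sparse-learning stage being compared above, assuming the Torch-Pruning v1.x API (GroupNormImportance, GroupNormPruner, pruner.regularize); the class and argument names are taken from later releases and may differ slightly in v1.0/v1.1, so treat this as an illustration rather than the exact benchmark script.

```python
import torch
import torch.nn.functional as F
import torch_pruning as tp
from torchvision.models import resnet18  # stand-in for the CIFAR-10 ResNet-56 used in this thread

model = resnet18(num_classes=10)
example_inputs = torch.randn(1, 3, 224, 224)

# Group-norm importance + pruner (the component whose v1.0 / v1.1 behavior differs)
imp = tp.importance.GroupNormImportance(p=2)
pruner = tp.pruner.GroupNormPruner(
    model,
    example_inputs,
    importance=imp,
    ch_sparsity=0.5,            # hypothetical target sparsity, not the thread's setting
    ignored_layers=[model.fc],  # keep the classifier head unpruned
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def sparse_train_step(x, y):
    """One sparse-learning step: the only pruning-specific call is pruner.regularize."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # In v1.1 this penalty was also applied to biases, which is what
    # causes the high validation loss discussed above; v1.0 skips biases.
    pruner.regularize(model)
    optimizer.step()
    return loss.item()
```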
My results with the latest version.
Got it, thank you for the reply. I'll switch to the v1.0 code and try it. But is this the reason the pruned model's performance drops? The accuracy after retraining also fails to reach expectations; what causes that?
By retraining do you mean pretraining? The bias issue affects both the regularization and the finetuning stages.
Yes, that's right.
Hi, sorry for the late reply. I just went through the pre-training => sparse learning => pruning => post-training pipeline again, and the retrained accuracy did not fall short of expectations.

Pretraining
[04/18 16:27:23] cifar10-resnet56 INFO: Epoch 197/200, Acc=0.9359, Val Loss=0.2645, lr=0.0001
[04/18 16:28:52] cifar10-resnet56 INFO: Epoch 198/200, Acc=0.9359, Val Loss=0.2621, lr=0.0001
[04/18 16:30:24] cifar10-resnet56 INFO: Epoch 199/200, Acc=0.9363, Val Loss=0.2626, lr=0.0001
[04/18 16:30:24] cifar10-resnet56 INFO: Best Acc=0.9376

Sparse Learning & Pruning
[04/18 19:00:38] cifar10-global-group_sl-resnet56 INFO: Params: 0.86 M => 0.30 M (35.28%)
[04/18 19:00:38] cifar10-global-group_sl-resnet56 INFO: FLOPs: 127.12 M => 49.48 M (38.93%, 2.57X )
[04/18 19:00:38] cifar10-global-group_sl-resnet56 INFO: Acc: 0.9366 => 0.7992
[04/18 19:00:38] cifar10-global-group_sl-resnet56 INFO: Val Loss: 0.2264 => 0.7311
[04/18 19:00:38] cifar10-global-group_sl-resnet56 INFO: Finetuning...

Post-Training
[04/18 19:43:17] cifar10-global-group_sl-resnet56 INFO: Epoch 96/100, Acc=0.9363, Val Loss=0.2333, lr=0.0001
[04/18 19:43:33] cifar10-global-group_sl-resnet56 INFO: Epoch 97/100, Acc=0.9350, Val Loss=0.2335, lr=0.0001
[04/18 19:43:48] cifar10-global-group_sl-resnet56 INFO: Epoch 98/100, Acc=0.9361, Val Loss=0.2307, lr=0.0001
[04/18 19:44:03] cifar10-global-group_sl-resnet56 INFO: Epoch 99/100, Acc=0.9365, Val Loss=0.2326, lr=0.0001
[04/18 19:44:03] cifar10-global-group_sl-resnet56 INFO: Best Acc=0.9368
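Continuing the sketch from earlier in the thread, the pruning step and the Params/FLOPs accounting behind the log lines above would look roughly like this; tp.utils.count_ops_and_params and pruner.step are standard Torch-Pruning calls, but this is an illustrative sketch, not the benchmark script that produced the log.

```python
# Measure the dense model, prune, then measure again (cf. the "Params"/"FLOPs" lines in the log).
base_macs, base_params = tp.utils.count_ops_and_params(model, example_inputs)

pruner.step()  # remove the channel groups selected by the group-norm importance

macs, params = tp.utils.count_ops_and_params(model, example_inputs)
print(f"Params: {base_params / 1e6:.2f} M => {params / 1e6:.2f} M "
      f"({params / base_params * 100:.2f}%)")
print(f"FLOPs: {base_macs / 1e6:.2f} M => {macs / 1e6:.2f} M "
      f"({macs / base_macs * 100:.2f}%, {base_macs / macs:.2f}X)")

# Post-training: ordinary fine-tuning of the pruned model then recovers the accuracy,
# as in the 0.7992 => 0.9368 progression shown in the log above.
```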
Under the parameter configuration below, why do both the retrained and the pruned results fall short of expectations? Where did it go wrong?