[BAD VERSION2] Make module swap the main QAT flow again #1019
Conversation
Stack from ghstack (oldest at bottom):

Summary: Following #987, this commit makes module swap the main QAT flow today. We remove all tensor subclass fake quantize injection logic, since it is not needed in either the short-term or the long-term plans for QAT. In the short term, we will continue to use a full module swap flow, and we will only migrate to the long-term flow once there is general distributed support for tensor subclasses and tensor subclass composability provides meaningful benefits.

Test Plan:
python test/quantization/test_qat.py
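For reference, a minimal sketch of the module-swap QAT flow this PR keeps as the main path, based on torchao's prototype QAT API from this era (the exact import path and the `groupsize` argument are assumptions drawn from that API, not taken from this PR):

```python
# Sketch of the module-swap QAT flow (assumed prototype API of this era).
import torch
from torchao.quantization.prototype.qat import Int8DynActInt4WeightQATQuantizer

model = torch.nn.Sequential(torch.nn.Linear(512, 512))

# prepare() swaps nn.Linear modules for QAT linears that fake-quantize
# activations (int8, dynamic) and weights (int4, grouped) in forward.
qat_quantizer = Int8DynActInt4WeightQATQuantizer(groupsize=32)
model = qat_quantizer.prepare(model)

# ... fine-tune the model with fake quantization in the loop ...

# convert() swaps the QAT linears for actually-quantized linears.
model = qat_quantizer.convert(model)
```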
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1019

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 unrelated failure) As of commit 0756f39 with merge base 5a4857e:

BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
# | 8da4w QAT |
# =================

def int8_dynamic_activation_int4_weight_fake_quantize(group_size=32):
```
so we are not planning to use the fake quant tensor subclass? I remember there were some benefits, can you write down the plan for this? maybe either in the summary or in code comments
I thought we decided on using fake quantize tensor subclass for the long term: #987?
This path is doing tensor subclass injection. I think we don't want to do that in the long term. E.g. what fp8 training does is use module swaps to insert the subclasses, like:
ao/torchao/float8/float8_linear.py, line 500 at commit dec0313:

```python
weight_fp8 = hp_tensor_and_scale_to_float8(
```
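To make the comparison concrete, here is a minimal sketch of module-swap-based insertion (hypothetical names throughout: `FakeQuantLinear`, `swap_linears`, and the stand-in `fake_quantize_weight`; this is not torchao's actual code). The swap happens at module granularity, and the quantized view of the weight is produced inside the swapped module's forward, analogous to Float8Linear calling `hp_tensor_and_scale_to_float8`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize_weight(w: torch.Tensor) -> torch.Tensor:
    # Stand-in for wrapping the weight in a (fake-)quantized tensor
    # subclass: simple per-tensor symmetric int8 fake quantization.
    # (A real QAT flow would also use a straight-through estimator.)
    scale = w.abs().amax().clamp(min=1e-12) / 127.0
    return torch.clamp(torch.round(w / scale), -128, 127) * scale

class FakeQuantLinear(nn.Linear):
    """Hypothetical linear swapped in place of nn.Linear."""

    @classmethod
    def from_float(cls, mod: nn.Linear) -> "FakeQuantLinear":
        new_mod = cls(mod.in_features, mod.out_features, bias=mod.bias is not None)
        new_mod.weight = mod.weight
        new_mod.bias = mod.bias
        return new_mod

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The quantized view of the weight is created here, inside the
        # swapped module, rather than injected onto the original module.
        return F.linear(x, fake_quantize_weight(self.weight), self.bias)

def swap_linears(model: nn.Module) -> None:
    # Recursively replace every nn.Linear with the QAT version.
    for name, child in model.named_children():
        if isinstance(child, nn.Linear) and not isinstance(child, FakeQuantLinear):
            setattr(model, name, FakeQuantLinear.from_float(child))
        else:
            swap_linears(child)
```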
oh OK, I thought AffineFakeQuantizedTensor was removed. then sounds good
oh no, this was merged into the wrong branch, need to reopen again
Reopened here: #1037