QAT recommendation #2451
In the docs, it is recommended not to change the quantization representation (scale) during training, or at least not too frequently. How exactly do I keep the scale from changing during QAT training?

Comments
@ttyio ^ ^ I would also like to learn about this.
@shuyuan-wang ,
By calling the method that disables calibration on the quantizers, the calibration is turned off, and you can then call the method that enables quantization to do fine-tuning without changing the scale.
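The exact calls were lost from the comment above, but a minimal sketch of this pattern, assuming NVIDIA's pytorch-quantization toolkit and its TensorQuantizer methods (load_calib_amax, disable_calib, enable_quant), could look like this:

```python
# Hedged sketch, assuming pytorch-quantization's TensorQuantizer API;
# method names may differ across toolkit versions.
from pytorch_quantization import nn as quant_nn

def freeze_quantizer_scales(model):
    """After calibration: fix amax (scale) and keep fake-quant active."""
    for _, module in model.named_modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                module.load_calib_amax()  # lock in the calibrated scale
            module.disable_calib()        # stop collecting statistics
            module.enable_quant()         # quantize in forward passes

# model = ...  # a model whose layers use quantized variants (placeholder)
# freeze_quantizer_scales(model)
# ...then run the usual fine-tuning loop; the optimizer updates the
# weights while the scales stay constant.
```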
Thanks!
Sorry, I have a question. During QAT fine-tuning, the model weights will change. If the weight scale remains unchanged, is the scale calculated during calibration still reasonable? Should the scale never be updated during fine-tuning? If it can be updated, how should it be updated during training, and are there any recommended update strategies?
I would say that during QAT, the weights are adjusted based on the scale newly calculated during PTQ.
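To illustrate why the weights can adapt while the scale stays fixed, here is a small, hypothetical plain-PyTorch sketch (not from this thread) of per-tensor fake quantization with a straight-through estimator: the forward pass snaps the weights to the fixed grid defined by the scale, while gradients pass straight through to the weights.

```python
import torch

def fake_quant(w: torch.Tensor, scale: float, bits: int = 8) -> torch.Tensor:
    """Fake-quantize w against a fixed scale (straight-through estimator)."""
    qmax = 2 ** (bits - 1) - 1                       # 127 for int8
    q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    # Forward returns the quantized value; backward treats round/clamp as
    # identity, so gradients update w while the scale stays constant.
    return w + (q - w).detach()
```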