
NaN problem of Qwen2-72B quantization #519

Merged
1 commit merged into casper-hansen:main on Jun 24, 2024

Conversation

baoyf4244 (Contributor)

Change the weight scaling formulation; fix the NaN problem when quantizing the Qwen2-72B model.

For #498: casper-hansen in #516 and the Qwen team in yangyo@32bf03c?diff=split&w=1 fixed this problem by setting nan or inf values to 1 to work around it, but I think this is unreasonable.
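
For reference, that workaround looks roughly like this (a minimal sketch; the `scales` tensor, its values, and the use of `torch.nan_to_num` are illustrative, not the literal code from #516 or the Qwen commit):

```python
import torch

# Illustrative per-channel scales from AWQ's scale search; values are made up.
scales = torch.tensor([0.5, float("nan"), float("inf"), 2.0])

# The workaround: overwrite non-finite entries with a neutral scale of 1,
# so those channels are effectively left unscaled.
scales = torch.nan_to_num(scales, nan=1.0, posinf=1.0, neginf=1.0)
print(scales)  # tensor([0.5000, 1.0000, 1.0000, 2.0000])
```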

I found that the occurrence of NaN is caused by the weight-scaling step: some weights, e.g. in mlp.gate_proj and mlp.up_proj, fall outside the representable range of float16 and are flushed to 0 when the model is loaded. The NaNs then occur when calculating 0/0. Adding a small value to the denominator solves this problem.
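
A minimal reproduction of the failure and the fix, assuming an AWQ-style per-channel scale of the form `scales = x_max**ratio / w_max**(1 - ratio)`; the tensor values and the epsilon of 1e-6 are illustrative, not the exact constants from this PR:

```python
import torch

ratio = 0.5

# A tiny weight that underflows to 0 when the checkpoint is loaded in fp16.
w = torch.tensor([1e-8]).to(torch.float16).float()  # -> tensor([0.])
x_max = torch.tensor([0.0])  # matching zero activation statistic

w_max = w.abs().amax()
naive = x_max.pow(ratio) / w_max.pow(1 - ratio)           # 0 / 0 -> nan
fixed = x_max.pow(ratio) / (w_max.pow(1 - ratio) + 1e-6)  # stays finite

print(naive.item(), fixed.item())  # nan 0.0
```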

casper-hansen (Owner)

Hi @baoyf4244, thanks for the fix! This seems much more appropriate than skipping the scaling of certain values.

casper-hansen merged commit c53cc7e into casper-hansen:main on Jun 24, 2024