Parameterize AWQ config, device in unit test #3000
base: main
Conversation
Introduce a prototype AWQ/SmoothQuant benchmark within lm_eval (vLLM).

Related Issue/PR: pytorch#2815

Test plan: A toy tokenizer model is introduced to validate `TransformerEvalWrapper`, and the toy linear models are updated to be compatible with it. The updated unit tests (`test_awq.py` and `test_smoothquant.py`) cover these changes.

Future plan: This PR doesn't address latency or memory consumption; the metrics need to be expanded for comprehensive AWQ/SmoothQuant benchmarks.
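For context, here is a rough sketch of how such a benchmark might be driven against a vLLM backend, assuming lm-evaluation-harness's `simple_evaluate` API; the checkpoint path and task are placeholders, not values from this PR:

```python
# Rough sketch (not code from this PR) of an lm_eval run against a vLLM backend.
# The checkpoint path and task are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    # Point this at an AWQ/SmoothQuant-quantized checkpoint to benchmark it.
    model_args="pretrained=/path/to/quantized-checkpoint",
    tasks=["wikitext"],
)

# `results["results"]` maps task names to metric dicts (e.g. word_perplexity).
print(results["results"]["wikitext"])
```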
looks fine, also we could just test on real models through the release scripts I think
Removed the tokenizer because it caused too many errors, mostly dispatch errors, to keep the test reproducible; please check the updated overview.
Summary:
Update the `device_to_base_configs` parameter to the `@parametrize("device,base_config", configs)` format so the test is better structured.

Test plan:
test/prototype/test_awq.py

Future plan:
AWQ/SmoothQuant benchmark within the vLLM ecosystem. See #2815 for more info.
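For illustration, here is a minimal sketch of the parametrization described above, assuming `parametrize`/`instantiate_parametrized_tests` from `torch.testing._internal.common_utils`; the config values and test body are placeholders, not the actual contents of `test/prototype/test_awq.py`:

```python
# Minimal sketch of the @parametrize("device,base_config", configs) pattern.
# The (device, base_config) pairs below are hypothetical placeholders.
from torch.testing._internal.common_utils import (
    TestCase,
    instantiate_parametrized_tests,
    parametrize,
    run_tests,
)

configs = [
    ("cpu", "base_config_a"),
    ("cuda", "base_config_b"),
]


class TestAWQ(TestCase):
    # Each (device, base_config) pair becomes its own named test case,
    # replacing iteration over a device_to_base_configs dict inside one test.
    @parametrize("device,base_config", configs)
    def test_awq(self, device, base_config):
        self.assertIn(device, ("cpu", "cuda"))
        self.assertIsNotNone(base_config)


instantiate_parametrized_tests(TestAWQ)

if __name__ == "__main__":
    run_tests()
```

One benefit of this layout is that each (device, base_config) pair shows up as a separately named test in CI output, so individual combinations can be skipped or debugged in isolation.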