Enhance Auto-Round #870

Merged 10 commits into pytorch:main on Sep 18, 2024
Conversation

yiliu30 (Contributor) commented on Sep 11, 2024

This PR includes several enhancements to Auto-Round:

1. Use torch.compile to speed up the Auto-Round optimization process (a minimal sketch of the idea follows below):

python torchao/prototype/autoround/autoround_llm.py -c
  • meta-llama/Llama-2-7b-chat-hf: about 1.29x speedup, 27 sec/block -> 21 sec/block
  • meta-llama/Meta-Llama-3.1-8B-Instruct: about 1.23x speedup, 32 sec/block -> 26 sec/block

yiliu30#18
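A minimal sketch of the pattern, assuming a toy stand-in for the per-block objective (`rounding_loss` is hypothetical, not torchao's API; the real `-c` integration lives in yiliu30#18):

```python
import torch

# Hypothetical stand-in for the per-block objective that Auto-Round optimizes.
# Compiling it once lets the compiled graph be reused as the optimization
# sweeps over decoder blocks, which is where the sec/block gain comes from.
@torch.compile
def rounding_loss(w: torch.Tensor, w_q: torch.Tensor) -> torch.Tensor:
    return torch.mean((w - w_q) ** 2)

w = torch.randn(256, 256)
w_q = torch.round(w * 16) / 16  # crude stand-in for 4-bit quantization
print(rounding_loss(w, w_q))
```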

2. Add an AO_USE_DETERMINISTIC_ALGORITHMS flag to make the lm-eval results reproducible:

AO_USE_DETERMINISTIC_ALGORITHMS=1 python torchao/prototype/autoround/eval_autoround.py

yiliu30#19
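A hedged sketch of what such a flag plausibly wires up; this is an assumption for illustration, not the PR's code (see yiliu30#19 for the actual implementation):

```python
import os
import torch

# Assumed behavior: when the env var is set, fix the RNG state and force
# PyTorch to avoid (or error out on) nondeterministic kernels so that
# repeated lm-eval runs produce identical numbers.
if os.environ.get("AO_USE_DETERMINISTIC_ALGORITHMS", "0") == "1":
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")  # deterministic cuBLAS GEMMs
    torch.manual_seed(0)
    torch.use_deterministic_algorithms(True)
```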

3. Expose gradient_accumulate_steps to users and update results:

  • For meta-llama/Meta-Llama-3.1-8B-Instruct, here are the updated results:
| | Avg. | Mmlu | Piqa | Winogrande | Hellaswag | Lambada_openai |
|---|---|---|---|---|---|---|
| bf16 | 0.7080 | 0.6783 | 0.8003 | 0.7403 | 0.5910 | 0.7303 |
| torchao-int4wo | 0.6883 | 0.6363 | 0.7938 | 0.7348 | 0.5784 | 0.6980 |
| autoround-4bit | 0.6996 | 0.6669 | 0.7916 | 0.7285 | 0.5846 | 0.7262 |
| autoround-4bit* | 0.7010 | 0.6621 | 0.7976 | 0.7316 | 0.5847 | 0.7291 |

For more details, please refer to README.md.

yiliu30#20
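For context, a generic sketch of what `gradient_accumulate_steps` controls, with hypothetical model and data stand-ins (not the Auto-Round training loop itself):

```python
import torch

# Gradients from several fixed batches are averaged before each optimizer
# step, giving a larger effective batch without extra memory.
model = torch.nn.Linear(16, 16)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
gradient_accumulate_steps = 4

for i, x in enumerate(torch.randn(8, 8, 16)):  # 8 toy batches of shape (8, 16)
    loss = model(x).pow(2).mean() / gradient_accumulate_steps  # scale so grads average
    loss.backward()  # grads accumulate in .grad across iterations
    if (i + 1) % gradient_accumulate_steps == 0:
        opt.step()
        opt.zero_grad()
```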

TODO:

cc @wenhuach21 @thuang6

pytorch-bot commented on Sep 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/870

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 47103a5 with merge base bd264f9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Sep 11, 2024
wenhuach21 commented:

The reason gradient accumulation works better: to align more closely with ao's API, we switched from random sampling in our original training implementation to using fixed samples within each batch, which reduces flexibility. Gradient accumulation partially recovers that flexibility by mixing gradients across several fixed batches.
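A tiny check of that intuition (illustrative only, not from the PR): accumulating gradients over two fixed batches matches a single gradient over their union, so accumulation effectively re-mixes samples across batch boundaries.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
a, b = torch.randn(3, 4), torch.randn(3, 4)

# Two accumulated, half-weighted backward passes over fixed batches...
model.zero_grad()
(model(a).mean() / 2).backward()
(model(b).mean() / 2).backward()
accumulated = model.weight.grad.clone()

# ...equal one backward pass over the concatenated batch.
model.zero_grad()
model(torch.cat([a, b])).mean().backward()
assert torch.allclose(accumulated, model.weight.grad, atol=1e-6)
```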

yiliu30 (Contributor, Author) commented on Sep 11, 2024

Hi @jerryzh168, could you please take a look, thanks!

jerryzh168 (Contributor) left a comment:

LGTM!

jerryzh168 merged commit 85a6113 into pytorch:main on Sep 18, 2024
17 checks passed