torchbench hf_* models fail on both TPU and GPU #6864
Comments
Are we using nightly HF as well?
No, Torchbench is using transformers==4.38. We use the same config.
The failure is related to this PR: #6792. Here is the comment from @JackCaoG:
Waiting for pending fix from @alanwaketan.
Sorry, I double-checked: the failure should be due to changes in torchbench. Let me confirm which PR and I will make an update.
The issue is related to changes in the torchbench upstream: pytorch/benchmark#2197. In torchbenchmark/util/framework/huggingface/model_factory.py, self.example_inputs now becomes a dict instead of a list of tensors.
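A harness that previously unpacked example_inputs positionally would break once torchbench started returning a dict of keyword arguments. A minimal sketch of a compatibility shim (the helper name call_model is hypothetical, not part of torchbench's API) that accepts both shapes:

```python
def call_model(model, example_inputs):
    """Invoke `model` with torchbench example inputs.

    After pytorch/benchmark#2197, HuggingFace models expose
    example_inputs as a dict of keyword arguments; older versions
    used a list/tuple of positional tensors. Handle both so the
    benchmark runner works across torchbench versions.
    """
    if isinstance(example_inputs, dict):
        # New format: pass entries as keyword arguments.
        return model(**example_inputs)
    # Old format: unpack positionally.
    return model(*example_inputs)
```

For example, `call_model(model, {"input_ids": ids, "labels": ids})` and `call_model(model, [ids])` both dispatch correctly.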
🐛 Bug
Torchbench models like hf_Albert, hf_Bart, hf_Bert, hf_Bert_large, hf_BigBird, hf_DistilBert, hf_GPT2, hf_GPT2_large, hf_Longformer, hf_Reformer, hf_T5, hf_T5_base, hf_T5_generate, and hf_T5_large all failed recently with the same error.
To Reproduce
Steps to reproduce the behavior:
Error log: