[Pallas] Introduce GMM(torch.autograd.Function) #7152
Conversation
@@ -374,6 +374,81 @@ def test_gmm_backward(self):
    # Make sure gmm doesn't fallback.
    self.assertNotIn("aten::", met.short_metrics_report())

  @unittest.skipIf(xr.device_type() != 'TPU', "This test only works on TPU.")
Do you need a TPU version check here?
lol, good question.
v2 is pretty happy on the tree.
Interesting... I thought Pallas was not supported on v2.
GMM is just mm... the kernel is simple... Others fuse softmax, etc.
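(For reference, if a version gate ever did become necessary, the skip condition could be extended along the lines below. This is only a hypothetical sketch; it assumes a `tpu.version()` helper under `torch_xla._internal`, and per the thread above no such gate is needed for this kernel.)

```python
import unittest

import torch_xla.runtime as xr
from torch_xla._internal import tpu  # assumption: exposes tpu.version()


class GmmTest(unittest.TestCase):

  # Hypothetical: gate on both the device type and a minimum TPU version.
  # The device-type check short-circuits, so tpu.version() is only
  # evaluated when actually running on TPU.
  @unittest.skipIf(
      xr.device_type() != 'TPU' or tpu.version() < 3,
      "This test only works on TPU v3+.")
  def test_gmm_backward_autograd(self):
    ...
```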
Thanks, Jack, for approving.
Skipping GPU tests to move fast.
Summary:
This pull request makes GMM a torch.autograd.Function so that we can use torch.autograd.backward instead of manual backpropagation.
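To illustrate the pattern (not the exact kernel code in this PR), here is a minimal, self-contained sketch of a grouped matmul wrapped in a `torch.autograd.Function`. Plain `torch` ops stand in for the Pallas forward/backward kernels, and the shapes and signature (`lhs: (m, k)`, `rhs: (num_groups, k, n)`, `group_sizes` summing to `m`) are assumptions for illustration. The point is that once `forward`/`backward` are defined, `torch.autograd.backward` (or `.backward()` on a loss) drives the gradient computation instead of a manual call to a backward kernel.

```python
import torch


class Gmm(torch.autograd.Function):
    """Sketch of a grouped matmul exposed to autograd.

    Plain torch ops stand in for the Pallas forward/backward kernels.
    """

    @staticmethod
    def forward(ctx, lhs, rhs, group_sizes):
        # lhs: (m, k), rhs: (num_groups, k, n), group_sizes sums to m.
        ctx.save_for_backward(lhs, rhs, group_sizes)
        out = lhs.new_empty(lhs.shape[0], rhs.shape[-1])
        start = 0
        for g, size in enumerate(group_sizes.tolist()):
            out[start:start + size] = lhs[start:start + size] @ rhs[g]
            start += size
        return out

    @staticmethod
    def backward(ctx, grad_out):
        lhs, rhs, group_sizes = ctx.saved_tensors
        grad_lhs = torch.empty_like(lhs)
        grad_rhs = torch.empty_like(rhs)
        start = 0
        for g, size in enumerate(group_sizes.tolist()):
            grad_lhs[start:start + size] = grad_out[start:start + size] @ rhs[g].T
            grad_rhs[g] = lhs[start:start + size].T @ grad_out[start:start + size]
            start += size
        # group_sizes is an integer tensor and gets no gradient.
        return grad_lhs, grad_rhs, None


# Autograd now handles backprop; no manual backward-kernel call needed.
lhs = torch.randn(6, 4, requires_grad=True)
rhs = torch.randn(2, 4, 3, requires_grad=True)
group_sizes = torch.tensor([2, 4])
Gmm.apply(lhs, rhs, group_sizes).sum().backward()
assert lhs.grad.shape == lhs.shape and rhs.grad.shape == rhs.shape
```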
Test Plan:
python test/test_gmm.py