Replace torch einsum with opt_einsum #440
Conversation
This pull request was exported from Phabricator. Differential Revision: D37128344
Optimized einsums show a significant reduction in runtime. The largest impact comes from convolutional and linear layers, uniformly across all layer sizes, measured over 10 iterations of benchmarking. In the table below, runtime_x is from torch einsum and runtime_y is from opt_einsum.
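For reference, a minimal sketch of the kind of timing comparison described above; the shapes, the contraction, and the bench helper are illustrative placeholders, not the actual Opacus benchmark suite (that analysis is in the notebook linked further down):

    import time

    import torch
    from opt_einsum import contract

    # Hypothetical per-sample-gradient-style contraction; batch size and
    # layer widths are made up for illustration.
    A = torch.randn(64, 512, 256)
    B = torch.randn(64, 512, 128)

    def bench(fn, iters=10):
        # One warm-up call, then average wall-clock time over `iters` runs.
        fn()
        start = time.perf_counter()
        for _ in range(iters):
            fn()
        return (time.perf_counter() - start) / iters

    runtime_x = bench(lambda: torch.einsum("n...i,n...j->nij", B, A))  # torch einsum
    runtime_y = bench(lambda: contract("n...i,n...j->nij", B, A))      # opt_einsum
    print(f"runtime_x={runtime_x:.6f}s  runtime_y={runtime_y:.6f}s")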
Summary: Pull Request resolved: pytorch#440

This replaces PyTorch einsums with optimized einsums from opt_einsum. Per https://optimized-einsum.readthedocs.io/en/stable/, opt_einsum is faster, and our Opacus benchmarking results corroborate this.

Differential Revision: D37128344
fbshipit-source-id: 961fb1e487542a4226ea09367911fdf88252b934
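The change itself is mechanical: call sites that used torch.einsum switch to opt_einsum's contract, which plans a cheaper contraction order and then dispatches back to torch operations. A hedged sketch of the pattern; the equation below is typical of Opacus' per-sample gradient contractions, not a literal excerpt from this diff:

    import torch
    from opt_einsum import contract

    backprops = torch.randn(32, 10)    # illustrative shapes
    activations = torch.randn(32, 20)

    # Before: grad_sample = torch.einsum("n...i,n...j->nij", backprops, activations)
    # After:
    grad_sample = contract("n...i,n...j->nij", backprops, activations)
    print(grad_sample.shape)  # torch.Size([32, 10, 20])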
Force-pushed from dcc4acf to e55464d
Attached is a notebook with the benchmarking and runtime comparison analysis: https://colab.research.google.com/drive/1WAjSYzyKg7UisShNNwD4FMv6tIMGCzuL#scrollTo=OY0TWxFuXRwt
Thanks. Looks good to me. Please fix the lint error before merging.
requirements.txt (Outdated)
@@ -1,3 +1,4 @@
 numpy>=1.15
 torch>=1.8
 scipy>=1.2
+opt-einsum==3.3.0
Suggested change:
-opt-einsum==3.3.0
+opt-einsum>=3.3.0
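Whichever version constraint is chosen, opt_einsum detects the backend of the arrays it is given and performs the contraction with that library's own operations, so torch tensors go in and torch tensors come out. A minimal sanity check, assuming opt-einsum is installed:

    import torch
    from opt_einsum import contract

    x = torch.randn(8, 16)
    y = torch.randn(16, 4)

    out = contract("ij,jk->ik", x, y)  # dispatched to torch under the hood
    assert isinstance(out, torch.Tensor)
    assert torch.allclose(out, torch.einsum("ij,jk->ik", x, y))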
The commit was subsequently force-pushed through e55464d → 3c6b7e7 → c317bb0 → c8c8b0c → a664e86 → afe7bbb → f43033d → a4c62ea → 106325f, each time re-exported from Phabricator (Differential Revision: D37128344) with the same summary and a fresh fbshipit-source-id; the final revision adds "Reviewed By: karthikprasad".
This pull request was exported from Phabricator. Differential Revision: D37128344