
Replace torch einsum with opt_einsum #440

Closed · wants to merge 1 commit

Conversation

sayanghosh (Contributor)

Differential Revision: D37128344

facebook-github-bot added the CLA Signed and fb-exported labels on Jun 15, 2022. (CLA Signed: this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D37128344

sayanghosh (Contributor, Author) commented on Jun 15, 2022

Optimized einsums yield a significant reduction in runtime. The largest impact comes from the convolution and linear layers, uniformly across all layer sizes. Results are over 10 iterations of benchmarking. In the tables below, runtime_x is from torch einsum and runtime_y is from opt_einsum.

[4 images: benchmark tables comparing runtime_x (torch einsum) and runtime_y (opt_einsum) per layer type and size]
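For reference, a minimal sketch of the kind of comparison behind these numbers, assuming CPU tensors, a contraction of the sort Opacus uses for per-sample linear-layer gradients, and illustrative shapes and iteration counts (not the exact setup from the benchmarking notebook):

import timeit

import torch
from opt_einsum import contract

# Hypothetical batch / sequence / feature sizes, chosen only for illustration.
n, t, d_in, d_out = 64, 128, 512, 512
backprops = torch.randn(n, t, d_out)   # stand-in for per-sample backprops
activations = torch.randn(n, t, d_in)  # stand-in for layer activations

# Per-sample weight-gradient contraction pattern.
eq = "n...i,n...j->nij"

# runtime_x: torch.einsum, runtime_y: opt_einsum, 10 iterations each.
runtime_x = timeit.timeit(lambda: torch.einsum(eq, backprops, activations), number=10)
runtime_y = timeit.timeit(lambda: contract(eq, backprops, activations), number=10)

print(f"torch.einsum: {runtime_x:.4f}s over 10 iterations")
print(f"opt_einsum:   {runtime_y:.4f}s over 10 iterations")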


sayanghosh added a commit to sayanghosh/opacus that referenced this pull request on Jun 23, 2022
Summary:
Pull Request resolved: pytorch#440

We are using optimized einsums in place of PyTorch einsums. As per https://optimized-einsum.readthedocs.io/en/stable/, opt_einsum is faster, and our Opacus benchmarking results corroborate this.

Differential Revision: D37128344

fbshipit-source-id: 961fb1e487542a4226ea09367911fdf88252b934
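To make the described substitution concrete, a minimal sketch of what the swap looks like at a call site; the function name and shapes are hypothetical, not the exact Opacus grad-sample code:

import torch
from opt_einsum import contract

def linear_weight_grad_sample(activations: torch.Tensor, backprops: torch.Tensor) -> torch.Tensor:
    # Per-sample weight gradients for a linear layer.
    # Before: torch.einsum("n...i,n...j->nij", backprops, activations)
    # After:  contract("n...i,n...j->nij", backprops, activations)
    return contract("n...i,n...j->nij", backprops, activations)

opt_einsum dispatches to the torch backend when given torch tensors, so the call is a drop-in replacement for torch.einsum here.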
sayanghosh (Contributor, Author)

Attached a notebook with the benchmarking and runtime comparison analysis here: https://colab.research.google.com/drive/1WAjSYzyKg7UisShNNwD4FMv6tIMGCzuL#scrollTo=OY0TWxFuXRwt

karthikprasad (Contributor) left a comment:

Thanks. Looks good to me. Please fix the lint error before merging.

requirements.txt (Outdated)
@@ -1,3 +1,4 @@
 numpy>=1.15
 torch>=1.8
 scipy>=1.2
+opt-einsum==3.3.0

Suggested change
-opt-einsum==3.3.0
+opt-einsum>=3.3.0

sayanghosh added further commits to sayanghosh/opacus that referenced this pull request on Jun 24, 2022, each with the same summary and Differential Revision D37128344. The final commit message:

Summary:
Pull Request resolved: pytorch#440

We are using optimized einsums in place of PyTorch einsums. As per https://optimized-einsum.readthedocs.io/en/stable/, opt_einsum is faster, and our Opacus benchmarking results corroborate this.

Reviewed By: karthikprasad

Differential Revision: D37128344

fbshipit-source-id: 58bd594a349792f9112a356e3112f9bf5ded0c71
