Fix RNN-T loss memory usage #11144
Conversation
Great discovery of the memory usage issue and a very clean fix! Approved and thanks.
Thanks for the fix!
* Fix RNN-T memory usage
Signed-off-by: artbataev <artbataev@users.noreply.github.com>
Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
Signed-off-by: Hainan Xu <hainanx@nvidia.com>
What does this PR do?
Fixes memory usage for the Numba-based implementation of the RNN-T and Multi-blank Transducer losses.
The current implementation requires 3x the size of the logits in memory (logits, gradient, and an extra buffer the size of the logits). This PR reduces memory usage to the minimum possible 2x (logits and gradient).
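For illustration (sizes are assumptions, not taken from this PR): float32 logits of shape (batch=32, frames=512, target length=64, vocabulary=1024) occupy 32 · 512 · 64 · 1024 · 4 bytes = 4 GiB, so the peak memory attributable to the loss drops from roughly 12 GiB (3x) to roughly 8 GiB (2x).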
It looks like assigning tensors directly to `ctx` instead of saving them through `save_for_backward` breaks PyTorch's logic, and it copies the gradient tensor (which results in extra memory usage). The TDT loss was not affected by this issue (I'm unsure why, but it requires a contiguous tensor for the label-related logits).
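As a minimal sketch of the saving pattern in question (a simplified `torch.autograd.Function` with placeholder names and shapes, not the actual NeMo implementation):

```python
import torch


class _ExampleRNNTLoss(torch.autograd.Function):
    """Illustrative sketch: the Numba kernels compute both the costs and the
    full gradient w.r.t. the logits during the forward pass."""

    @staticmethod
    def forward(ctx, acts, labels, act_lens, label_lens):
        grads = torch.zeros_like(acts)          # filled by the Numba kernel in the real code
        costs = acts.new_zeros(acts.shape[0])   # per-utterance costs

        # Before: ctx.grads = grads             # direct attribute assignment on ctx
        ctx.save_for_backward(grads)            # after: the documented saving API
        return costs

    @staticmethod
    def backward(ctx, grad_output):
        (grads,) = ctx.saved_tensors
        # Scale the precomputed gradient by the incoming gradient of the costs.
        return grads * grad_output.view(-1, 1, 1, 1), None, None, None
```

With `save_for_backward`, PyTorch does its own bookkeeping (version checks, lifetime management) for the saved tensor rather than treating it as an opaque attribute, which appears to be what avoids the extra copy observed on main.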
Before (main):
After (this PR):
Code to check memory usage (the size of tensors other than the logits is negligible compared to the logits):
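The snippet itself is not reproduced above; below is a minimal sketch of such a check, assuming `RNNTLossNumba` is importable from `nemo.collections.asr.parts.numba.rnnt_loss` and called as `loss_fn(logits, labels, act_lens, label_lens)` (both are assumptions that may differ across NeMo versions):

```python
import torch

# Assumed import path and call signature; adjust if your NeMo version differs.
from nemo.collections.asr.parts.numba.rnnt_loss import RNNTLossNumba

B, T, U, V = 16, 512, 64, 1024  # illustrative sizes; the logits dominate memory
logits = torch.randn(B, T, U, V, device="cuda", requires_grad=True)
labels = torch.randint(1, V, (B, U - 1), dtype=torch.int64, device="cuda")
act_lens = torch.full((B,), T, dtype=torch.int64, device="cuda")
label_lens = torch.full((B,), U - 1, dtype=torch.int64, device="cuda")

loss_fn = RNNTLossNumba(blank=0, reduction="mean")
loss = loss_fn(logits, labels, act_lens, label_lens)
loss.backward()
torch.cuda.synchronize()

logits_bytes = logits.numel() * logits.element_size()
peak_bytes = torch.cuda.max_memory_allocated()
# Expected roughly 3x on main and roughly 2x with this PR.
print(f"peak memory / logits size: {peak_bytes / logits_bytes:.2f}x")
```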
Collection: [ASR]
Changelog
Use `ctx.save_for_backward(...)` instead of directly assigning tensors, as recommended by the PyTorch documentation.
Usage
# Add a code snippet demonstrating how to use this
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information