Changes to support TNLRV3 fine-tuning #4639
Merged
Conversation
- Added a test
- Fixed a type mismatch when calling the cuDNN reduce kernel
- Fixed the Python frontend to remove redundant states so it matches the PyTorch state dict
Tixxx added the training (issues related to ONNX Runtime training; typically submitted using template) and component:training-frontend labels on Jul 28, 2020
SherlockNoMad suggested changes on Jul 29, 2020
The PR mostly looks good. Please address the comments, and I think it's ready to go.
SherlockNoMad approved these changes on Jul 29, 2020
thiagocrepaldi pushed a commit that referenced this pull request on Aug 31, 2020
#4639 changed the default behavior by removing optimizer state from the state_dict/checkpoint APIs. The reason for that change was to allow models trained on ORT to be used for inference on PyTorch, which is an important feature. Due to the aforementioned change, when resuming training from a checkpoint, the optimizer would start with random weights, leading to poor performance. This behavior would also cause reproducibility issues, as the optimizer wouldn't be able to resume from its previous state. This PR adds a boolean flag to the state_dict/save_checkpoint APIs: when True (the default), both model and optimizer state are saved; when False, only the model state is kept.
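The flag described above can be illustrated with a minimal, self-contained sketch. This is a hypothetical simplification, not the actual onnxruntime training API: the function name, the `include_optimizer_state` parameter, and the toy trainer dictionary are all assumptions made for illustration.

```python
def state_dict(trainer, include_optimizer_state=True):
    """Return the training state as a dict.

    Hypothetical sketch of the behavior described in the commit message:
    when include_optimizer_state is True (the default), both model and
    optimizer state are returned; when False, only the model state is kept.
    """
    state = {"model": dict(trainer["model"])}
    if include_optimizer_state:
        # Keeping the optimizer's internal state (e.g. momentum buffers)
        # lets a resumed run continue exactly where it left off, which
        # preserves reproducibility.
        state["optimizer"] = dict(trainer["optimizer"])
    return state


# Toy "trainer" standing in for a real training session.
trainer = {"model": {"weight": 1.0}, "optimizer": {"momentum": 0.5}}

full_checkpoint = state_dict(trainer)                                # model + optimizer
inference_checkpoint = state_dict(trainer, include_optimizer_state=False)  # model only
```

Saving the model-only variant is what allows an ORT-trained model to be loaded for inference in PyTorch, while the full variant is what a resumed training run needs.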
thiagocrepaldi pushed a commit that referenced this pull request on Sep 1, 2020
#4639 changed the default behavior by removing optimizer state from the state_dict/checkpoint APIs. The reason for that change was to allow models trained on ORT to be used for inference on PyTorch, which is an important feature. Due to the aforementioned change, when resuming training from a checkpoint, the optimizer would start with random weights, leading to poor performance. This behavior would also cause reproducibility issues, as the optimizer wouldn't be able to resume from its previous state. This PR adds a boolean flag to the state_dict/save_checkpoint APIs: when True (the default), both model and optimizer state are saved; when False, only the model state is kept.
Description
Changes to support the TNLRV3 fine-tuning task.
Motivation and Context
To support TNLRV3 fine-tuning.