
Skip updating not trainable parameters in distribute transpiler #10049

Merged

Conversation

typhoonzero
Contributor

Fix #10014

@typhoonzero typhoonzero requested review from jacquesqiao and Yancey1989 and removed request for jacquesqiao April 19, 2018 05:42
@@ -222,8 +222,14 @@ def transpile(self,

         # step1: For large parameters and gradients, split them into smaller
         # blocks.
-        param_list = [pg[0] for pg in params_grads]
-        grad_list = [pg[1] for pg in params_grads]
+        param_list = []
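
(The diff is truncated at the line the review comment below is anchored to. A minimal sketch of what the full replacement block plausibly does, assuming params_grads is the list of (parameter, gradient) pairs passed into transpile() and Parameter is the framework's parameter class from the surrounding scope:

param_list = []
grad_list = []
for p, g in params_grads:
    # Skip parameters marked as not trainable so the transpiler does not
    # split them or generate update ops for them on the parameter server.
    if isinstance(p, Parameter) and not p.trainable:
        continue
    param_list.append(p)
    grad_list.append(g)

This is a reading of the PR intent, not the verbatim patched code.)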
Contributor

Maybe we can use filter for shorter code:

param_list = filter(lambda pg: type(pg[0]) == Parameter and pg[0].trainable, params_grads)

Contributor Author

Well, that way actually runs two for loops (one pass to build param_list and another to build grad_list), while the current implementation runs just one for loop.
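
(For illustration only, a rough sketch of the filter-style alternative being discussed, reusing Parameter and params_grads from the diff above; it shows why that style needs a separate pass over params_grads for each output list:

# hypothetical filter-based version: each list requires its own pass
param_list = [p for p, g in params_grads
              if not (isinstance(p, Parameter) and not p.trainable)]
grad_list = [g for p, g in params_grads
             if not (isinstance(p, Parameter) and not p.trainable)]

whereas the merged change builds both lists in a single for loop.)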

Contributor

@Yancey1989 Yancey1989 left a comment

LGTM

@typhoonzero typhoonzero merged commit 879b7c5 into PaddlePaddle:develop Apr 19, 2018
@typhoonzero typhoonzero deleted the fix_not_trainable_transpiler branch April 19, 2018 11:55