Better validator for matching parameters #489
Conversation
@ffuuugor has updated the pull request. You must reimport the pull request before landing.
@facebook-github-bot has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Looks good to me.
Thank you for the detailed description!
opacus/tests/privacy_engine_test.py (Outdated)

    ):
        model = self._init_model()
        model = PrivacyEngine.get_compatible_module(model)
        optimizer = torch.optim.SGD(model.parameters(), lr=self.LR, momentum=0)
        optimizer = torch.optim.SGD(
            [
I would suggest something like this for readability:

    model.parameters()
    if not opt_exclude_frozen
    else [p for p in model.parameters() if p.requires_grad]

Same in _init_vanilla_training.
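A runnable sketch of this suggestion, using a toy model in place of the test fixture (`opt_exclude_frozen` is the flag being discussed in this PR; everything else here is illustrative):

```python
import torch
import torch.nn as nn

# Stand-in for the model built by the test's _init_model helper.
model = nn.Linear(4, 2)
model.bias.requires_grad = False  # freeze one parameter

opt_exclude_frozen = True
params = (
    model.parameters()
    if not opt_exclude_frozen
    else [p for p in model.parameters() if p.requires_grad]
)
optimizer = torch.optim.SGD(params, lr=0.1)

# Only the trainable weight tensor ends up in the optimizer.
print(len(optimizer.param_groups[0]["params"]))  # 1
```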
Thanks for the clean and quick fix!
@ffuuugor has updated the pull request. You must reimport the pull request before landing.
@facebook-github-bot has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@ffuuugor has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
This PR resolves #477 and addresses various issues regarding parameter validation.
Background
As identified in #432, we want to warn the user if the optimizer's parameters differ from the model's parameters (e.g. because the module has been replaced by the ModuleValidator). The optimizer expects the weights to have the .grad_sample attribute populated by the GradSampleModule, so there is no point in having an optimizer whose set of parameters differs from the input model's.

Issue 1: Comparing values
One problem with how we do it currently is that we're checking for the wrong thing. We use torch.eq, which performs elementwise comparison and outputs True for tensors with identical values. The problem is that we don't need identical values, we need the exact same object, so that the optimizer can read the per-sample gradients computed by the GradSampleModule.

Issue 2: Frozen layers
When freezing some model layers, people are free to choose how to initialize their optimizer: with all model.parameters(), or with only those that are not frozen. It makes no material difference to the training process. However, if they do the latter, Opacus will complain that the model and optimizer weights are mismatched.
This PR
In this PR we address both issues by:
- using the in operator, which checks for the object itself rather than its value
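A short illustration of why identity, not value equality, is the right check (hypothetical names, not Opacus's actual validator code). Python's membership test tries `is` before `==`, so `p in model_params` succeeds immediately for the exact tensor object, while torch.eq is happy with any tensor that merely holds the same values:

```python
import torch

model_params = [torch.zeros(3), torch.ones(3)]
same_obj = model_params[0]
clone = model_params[0].clone()  # identical values, different object

# torch.eq: elementwise value comparison, True even for a distinct tensor.
print(torch.eq(clone, model_params[0]).all().item())  # True

# Identity: only the original object passes.
print(any(p is clone for p in model_params))  # False
print(same_obj in model_params)               # True (identity short-circuits)
```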