spelling check pt2 on docstrings in opacus folder #486

Closed
wants to merge 1 commit into from
4 changes: 2 additions & 2 deletions opacus/accountants/accountant.py
@@ -72,7 +72,7 @@ def get_optimizer_hook_fn(
"""
Returns a callback function which can be used to attach to DPOptimizer
Args:
-            sample_rate: Expected samping rate used for accounting
+            sample_rate: Expected sampling rate used for accounting
"""

def hook_fn(optim: DPOptimizer):
@@ -88,7 +88,7 @@ def hook_fn(optim: DPOptimizer):

def state_dict(self, destination: T_state_dict = None) -> T_state_dict:
"""
-        Retruns a dictionary containing the state of the accountant.
+        Returns a dictionary containing the state of the accountant.
Args:
destination: a mappable object to populate the current state_dict into.
If this arg is None, an OrderedDict is created and populated.
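For context on the get_optimizer_hook_fn docstring fixed above, here is a minimal sketch of how the returned hook might be attached, assuming the Opacus API of this era (RDPAccountant, DPOptimizer.attach_step_hook); the expected_batch_size argument and the concrete numbers are illustrative, not taken from this diff.

import torch
from opacus.accountants import RDPAccountant
from opacus.optimizers import DPOptimizer

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dp_optimizer = DPOptimizer(
    optimizer=optimizer,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=64,  # assumed constructor argument, not shown in this diff
)

accountant = RDPAccountant()
# sample_rate is the "expected sampling rate used for accounting" from the fixed line
hook_fn = accountant.get_optimizer_hook_fn(sample_rate=64 / 50_000)
dp_optimizer.attach_step_hook(hook_fn)  # the hook records one accounting step per optimizer step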
2 changes: 1 addition & 1 deletion opacus/optimizers/ddp_perlayeroptimizer.py
@@ -67,7 +67,7 @@ def __init__(
class DistributedPerLayerOptimizer(DPOptimizer):
"""
:class:`~opacus.optimizers.optimizer.DPOptimizer` that implements
-    per layer clipping strategy and is compatible with distibured data parallel
+    per layer clipping strategy and is compatible with distributed data parallel
"""

def __init__(
4 changes: 2 additions & 2 deletions opacus/optimizers/optimizer.py
@@ -113,7 +113,7 @@ def _generate_noise(
reference: The reference Tensor to get the appropriate shape and device
for generating the noise
generator: The PyTorch noise generator
-        secure_mode: boolean showing if "secure" noise need to be generate
+        secure_mode: boolean showing if "secure" noise need to be generated
(see the notes)

Notes:
@@ -186,7 +186,7 @@ class DPOptimizer(Optimizer):
Examples:
>>> module = MyCustomModel()
>>> optimizer = torch.optim.SGD(module.parameters(), lr=0.1)
-        >>> dp_optimzer = DPOptimizer(
+        >>> dp_optimizer = DPOptimizer(
... optimizer=optimizer,
... noise_multiplier=1.0,
... max_grad_norm=1.0,
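The _generate_noise docstring touched in the first hunk above describes drawing noise shaped like a reference tensor. A rough, non-secure sketch of that idea follows; it is not the library's implementation (Opacus handles secure_mode with a cryptographically secure generator).

import torch

def generate_noise_sketch(std: float, reference: torch.Tensor,
                          generator: torch.Generator = None) -> torch.Tensor:
    # Gaussian noise with the same shape, dtype and device as the reference tensor
    noise = torch.zeros_like(reference)
    if std == 0:
        return noise  # no noise requested
    return noise.normal_(0, std, generator=generator)

noise = generate_noise_sketch(std=1.0, reference=torch.zeros(3, 4))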
2 changes: 1 addition & 1 deletion opacus/utils/module_utils.py
@@ -72,7 +72,7 @@ def requires_grad(module: nn.Module, *, recurse: bool = False) -> bool:
Args:
module: PyTorch module whose parameters are to be examined.
recurse: Flag specifying if the gradient requirement check should
-            be applied recursively to sub-modules of the specified module
+            be applied recursively to submodules of the specified module

Returns:
Flag indicate if any parameters require gradients
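A sketch of the check described in the requires_grad docstring above, written against the docstring's wording ("any parameters require gradients"); the actual helper in opacus/utils/module_utils.py may differ in detail.

import torch.nn as nn

def requires_grad_sketch(module: nn.Module, *, recurse: bool = False) -> bool:
    # parameters(recurse=True) also walks submodules; recurse=False checks only
    # the module's own parameters
    return any(p.requires_grad for p in module.parameters(recurse=recurse))

frozen = nn.Linear(4, 4).requires_grad_(False)
print(requires_grad_sketch(frozen))  # False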
6 changes: 3 additions & 3 deletions opacus/utils/tensor_utils.py
@@ -30,7 +30,7 @@ def calc_sample_norms(
Calculates the norm of the given tensors for each sample.

This function calculates the overall norm of the given tensors for each sample,
-    assuming the each batch's dim is zero.
+    assuming each batch's dim is zero.

Args:
named_params: An iterator of tuples <name, param> with name being a
@@ -61,7 +61,7 @@ def calc_sample_norms_one_layer(param: torch.Tensor) -> torch.Tensor:
Calculates the norm of the given tensor (a single parameter) for each sample.

This function calculates the overall norm of the given tensor for each sample,
-    assuming the each batch's dim is zero.
+    assuming each batch's dim is zero.

It is equivalent to:
`calc_sample_norms(named_params=((None, param),))[0]`
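Both docstrings above describe per-sample norms with dimension 0 as the batch dimension. A compact sketch of that computation, illustrative rather than the code in tensor_utils.py:

import torch

def per_sample_norms_sketch(named_params):
    # per-parameter, per-sample L2 norms: flatten everything except dim 0
    per_param = [g.reshape(len(g), -1).norm(2, dim=-1) for _, g in named_params]
    # combine across parameters into one overall norm per sample ("norm of norms")
    return torch.stack(per_param, dim=0).norm(2, dim=0)

grads = [("weight", torch.randn(8, 3, 4)), ("bias", torch.randn(8, 3))]
print(per_sample_norms_sketch(grads).shape)  # torch.Size([8])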
@@ -90,7 +90,7 @@ def sum_over_all_but_batch_and_last_n(
Calculates the sum over all dimensions, except the first
(batch dimension), and excluding the last n_dims.

-    This function will ignore the first dimension and it will
+    This function will ignore the first dimension, and it will
not aggregate over the last n_dims dimensions.

Args:
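A minimal sketch of the reduction the sum_over_all_but_batch_and_last_n docstring describes, keeping dim 0 (the batch) and the last n_dims dimensions and summing over everything in between; it assumes the input has at least n_dims + 1 dimensions.

import torch

def sum_over_all_but_batch_and_last_n_sketch(tensor: torch.Tensor, n_dims: int) -> torch.Tensor:
    if tensor.dim() == n_dims + 1:
        return tensor  # only the batch dim and the last n_dims remain; nothing to sum
    dims = list(range(1, tensor.dim() - n_dims))
    return tensor.sum(dim=dims)

x = torch.randn(8, 5, 6, 7)
print(sum_over_all_but_batch_and_last_n_sketch(x, n_dims=2).shape)  # torch.Size([8, 6, 7])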
2 changes: 1 addition & 1 deletion opacus/utils/uniform_sampler.py
@@ -68,7 +68,7 @@ class DistributedUniformWithReplacementSampler(Sampler):
(plus or minus one sample)
3. Each replica selects each sample of its chunk independently
with probability `sample_rate`
-    4. Each replica ouputs the selected samples, which form a local batch
+    4. Each replica outputs the selected samples, which form a local batch

The sum of the lengths of the local batches follows a Poisson distribution.
In particular, the expected length of each local batch is:
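The sampling procedure described in this docstring amounts to per-replica Poisson sampling over a chunk of the dataset. A sketch of one round for a single replica, written from the docstring rather than the sampler's code; the shuffle-then-stride chunking, the rank argument, and the concrete sizes are assumptions for illustration.

import torch

def local_poisson_batch_sketch(total_size: int, num_replicas: int, rank: int,
                               sample_rate: float,
                               generator: torch.Generator = None) -> torch.Tensor:
    # steps 1-2: shuffle all indices, then take this replica's (roughly equal) chunk
    perm = torch.randperm(total_size, generator=generator)
    chunk = perm[rank::num_replicas]
    # step 3: keep each sample of the chunk independently with probability sample_rate
    mask = torch.rand(len(chunk), generator=generator) < sample_rate
    # step 4: the selected samples form this replica's local batch
    return chunk[mask]

batch = local_poisson_batch_sketch(total_size=1000, num_replicas=4, rank=0, sample_rate=0.01)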