[Docathon] Fix NO.6 NO.21 API Label (PaddlePaddle#57512)
Sekiro-x authored Sep 20, 2023
1 parent e091e83 commit 95da913
Showing 14 changed files with 19 additions and 19 deletions.
2 changes: 1 addition & 1 deletion python/paddle/incubate/optimizer/lars_momentum.py
@@ -45,7 +45,7 @@ class LarsMomentumOptimizer(Optimizer):
The default value is None in static graph mode, at this time all parameters will be updated.
regularization (WeightDecayRegularizer, optional): The strategy of regularization. There are two method: \
:ref:`api_base_regularizer_L1Decay` , :ref:`api_base_regularizer_L2Decay` . If a parameter has set \
- regularizer using :ref:`api_base_ParamAttr` already, the regularization setting here in optimizer will be \
+ regularizer using :ref:`api_paddle_ParamAttr` already, the regularization setting here in optimizer will be \
ignored for this parameter. Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
grad_clip (GradientClipBase, optional): Gradient cliping strategy, it's an instance of
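The precedence rule these docstrings describe, where a regularizer set on the parameter itself via `ParamAttr` makes the optimizer-level setting a no-op for that parameter, can be sketched in plain Python. This is a simplified model of the documented behavior, not Paddle's actual implementation; the function and argument names are illustrative:

```python
def effective_regularizer(param_reg, opt_reg):
    """Return the regularizer that actually applies to a parameter.

    Mirrors the documented rule: a regularizer already set on the
    parameter through ParamAttr wins; otherwise the optimizer-level
    regularization setting takes effect.
    """
    return param_reg if param_reg is not None else opt_reg


# Parameter with its own regularizer: the optimizer setting is ignored.
print(effective_regularizer("L1Decay(0.01)", "L2Decay(0.0001)"))  # L1Decay(0.01)
# Parameter without one: the optimizer setting applies.
print(effective_regularizer(None, "L2Decay(0.0001)"))  # L2Decay(0.0001)
```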
2 changes: 1 addition & 1 deletion python/paddle/incubate/optimizer/lbfgs.py
@@ -58,7 +58,7 @@ class LBFGS(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It canbe a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
6 changes: 3 additions & 3 deletions python/paddle/nn/layer/rnn.py
@@ -740,7 +740,7 @@ class SimpleRNNCell(RNNCellBase):
- **states** (Tensor): shape `[batch_size, hidden_size]`, the new hidden state, corresponding to :math:`h_{t}` in the formula.
Notes:
- All the weights and bias are initialized with `Uniform(-std, std)` by default. Where std = :math:`\frac{1}{\sqrt{hidden\_size}}`. For more information about parameter initialization, please refer to :ref:`api_base_ParamAttr`.
+ All the weights and bias are initialized with `Uniform(-std, std)` by default. Where std = :math:`\frac{1}{\sqrt{hidden\_size}}`. For more information about parameter initialization, please refer to :ref:`api_paddle_ParamAttr`.
Examples:
@@ -893,7 +893,7 @@ class LSTMCell(RNNCellBase):
Notes:
All the weights and bias are initialized with `Uniform(-std, std)` by
default. Where std = :math:`\frac{1}{\sqrt{hidden\_size}}`. For more
- information about parameter initialization, please refer to :ref:`api_base_ParamAttr`.
+ information about parameter initialization, please refer to :ref:`api_paddle_ParamAttr`.
Examples:
@@ -1054,7 +1054,7 @@ class GRUCell(RNNCellBase):
Notes:
All the weights and bias are initialized with `Uniform(-std, std)` by
default. Where std = :math:`\frac{1}{\sqrt{hidden\_size}}`. For more
- information about parameter initialization, please refer to s:ref:`api_base_ParamAttr`.
+ information about parameter initialization, please refer to s:ref:`api_paddle_ParamAttr`.
Examples:
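The default initialization these RNN-cell notes describe, `Uniform(-std, std)` with std = 1/sqrt(hidden_size), can be reproduced in a few lines of standard-library Python. A minimal sketch, with illustrative shapes and names (not Paddle's initializer code):

```python
import math
import random


def default_rnn_weight(rows, cols, hidden_size, seed=0):
    """Build a weight matrix the way the docstrings describe: every
    entry drawn from Uniform(-std, std), std = 1 / sqrt(hidden_size)."""
    rng = random.Random(seed)
    std = 1.0 / math.sqrt(hidden_size)
    return [[rng.uniform(-std, std) for _ in range(cols)]
            for _ in range(rows)]


hidden_size = 64
w_ih = default_rnn_weight(hidden_size, 32, hidden_size)  # input-to-hidden weights
std = 1.0 / math.sqrt(hidden_size)                       # 0.125 for hidden_size=64
assert all(-std <= v <= std for row in w_ih for v in row)
```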
2 changes: 1 addition & 1 deletion python/paddle/optimizer/adadelta.py
@@ -55,7 +55,7 @@ class Adadelta(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It canbe a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/adam.py
@@ -73,7 +73,7 @@ class Adam(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
It canbe a float value as coeff of L2 regularization or
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already,
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already,
the regularization setting here in optimizer will be ignored for this parameter.
Otherwise, the regularization setting here in optimizer will take effect.
Default None, meaning there is no regularization.
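When `weight_decay` is given as a float, the docstrings above say it acts as the coefficient of an L2 penalty; conceptually each raw gradient is augmented with `coeff * param` before the update step. A simplified plain-Python sketch of that effect (illustrative names, not Paddle's kernel, and ignoring per-parameter overrides):

```python
def l2_regularized_grad(params, grads, coeff):
    """Add the L2 penalty gradient coeff * param to each raw gradient,
    modeling a float weight_decay passed to the optimizer."""
    return [g + coeff * p for p, g in zip(params, grads)]


params = [1.0, -2.0, 0.5]
grads = [0.1, 0.1, 0.1]
# Each gradient is shifted toward shrinking its parameter by coeff * param.
print(l2_regularized_grad(params, grads, 0.01))
```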
2 changes: 1 addition & 1 deletion python/paddle/optimizer/adamax.py
@@ -68,7 +68,7 @@ class Adamax(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
It can be a float value as coeff of L2 regularization or
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already,
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already,
the regularization setting here in optimizer will be ignored for this parameter.
Otherwise, the regularization setting here in optimizer will take effect.
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/lbfgs.py
@@ -340,7 +340,7 @@ class LBFGS(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It canbe a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/momentum.py
@@ -60,7 +60,7 @@ class Momentum(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It can be a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/optimizer.py
@@ -109,7 +109,7 @@ class Optimizer:
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It canbe a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/rmsprop.py
@@ -92,7 +92,7 @@ class RMSProp(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization.
It can be a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already,
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already,
the regularization setting here in optimizer will be ignored for this parameter.
Otherwise, the regularization setting here in optimizer will take effect.
Default None, meaning there is no regularization.
2 changes: 1 addition & 1 deletion python/paddle/optimizer/sgd.py
@@ -41,7 +41,7 @@ class SGD(Optimizer):
weight_decay (float|WeightDecayRegularizer, optional): The strategy of regularization. \
It can be a float value as coeff of L2 regularization or \
:ref:`api_base_regularizer_L1Decay`, :ref:`api_base_regularizer_L2Decay`.
- If a parameter has set regularizer using :ref:`api_base_ParamAttr` already, \
+ If a parameter has set regularizer using :ref:`api_paddle_ParamAttr` already, \
the regularization setting here in optimizer will be ignored for this parameter. \
Otherwise, the regularization setting here in optimizer will take effect. \
Default None, meaning there is no regularization.
4 changes: 2 additions & 2 deletions python/paddle/static/nn/common.py
@@ -2566,10 +2566,10 @@ def bilinear_tensor_product(
:ref:`api_guide_Name` . Usually name is no need to set and None by default.
param_attr (ParamAttr|None): To specify the weight parameter attribute.
Default: None, which means the default weight parameter property is
- used. See usage for details in :ref:`api_base_ParamAttr` .
+ used. See usage for details in :ref:`api_paddle_ParamAttr` .
bias_attr (ParamAttr|None): To specify the bias parameter attribute.
Default: None, which means the default bias parameter property is
- used. See usage for details in :ref:`api_base_ParamAttr` .
+ used. See usage for details in :ref:`api_paddle_ParamAttr` .
Returns:
Tensor, A 2-D Tensor of shape [batch_size, size]. Data type is the same as input **x**.
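A bilinear tensor product combines two input vectors through a learned 3-D weight tensor: for each output channel k, out[b, k] = x[b]ᵀ · W_k · y[b], giving the `[batch_size, size]` output shape the docstring mentions. A pure-Python sketch of that contraction (illustrative only; it ignores the bias term and uses nested lists instead of tensors):

```python
def bilinear_tensor_product(x, y, W):
    """out[b][k] = sum_i sum_j x[b][i] * W[k][i][j] * y[b][j].

    W has shape [size, dim_x, dim_y]; output has shape [batch, size].
    """
    return [
        [sum(xb[i] * W[k][i][j] * yb[j]
             for i in range(len(xb)) for j in range(len(yb)))
         for k in range(len(W))]
        for xb, yb in zip(x, y)
    ]


x = [[1.0, 2.0]]
y = [[3.0, 1.0]]
W = [[[1.0, 0.0], [0.0, 1.0]]]           # one output channel, identity weights
print(bilinear_tensor_product(x, y, W))  # [[5.0]] since 1*3 + 2*1 = 5
```

With identity weights for the single channel, the contraction reduces to a plain dot product of x and y, which makes the example easy to check by hand.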
4 changes: 2 additions & 2 deletions python/paddle/static/nn/loss.py
@@ -62,10 +62,10 @@ def nce(
sample is 1.0.
param_attr (ParamAttr|None): To specify the weight parameter attribute.
Default: None, which means the default weight parameter property is
- used. See usage for details in :ref:`api_base_ParamAttr` .
+ used. See usage for details in :ref:`api_paddle_ParamAttr` .
bias_attr (ParamAttr|None): To specify the bias parameter attribute.
Default: None, which means the default bias parameter property is
- used. See usage for details in :ref:`api_base_ParamAttr` .
+ used. See usage for details in :ref:`api_paddle_ParamAttr` .
num_neg_samples (int): ${num_neg_samples_comment}.
name(str|None): For detailed information, please refer to
:ref:`api_guide_Name` . Usually name is no need to set and None by default.
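The `nce` layer above implements noise-contrastive estimation: instead of normalizing over every class, it trains a binary classifier to separate the true class from `num_neg_samples` sampled noise classes. A deliberately simplified sketch of that objective (illustrative names; real NCE also folds in correction terms from the noise distribution's log-probabilities):

```python
import math


def nce_loss(true_score, noise_scores):
    """Simplified NCE objective: the true class is a positive example,
    each sampled noise class a negative one."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    loss = -math.log(sigmoid(true_score))                        # pull true class up
    loss -= sum(math.log(1.0 - sigmoid(s)) for s in noise_scores)  # push noise down
    return loss


# A confident true score and low noise scores give a small loss.
print(nce_loss(2.0, [-1.0, -2.0]))
```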
4 changes: 2 additions & 2 deletions python/paddle/static/nn/sequence_lod.py
@@ -108,9 +108,9 @@ def sequence_conv(
on both sides of the sequence. If set 0, the length of :math:`filter\_size - 1` data
is padded at the end of each input sequence. Default: None.
bias_attr (ParamAttr): To specify the bias parameter property. Default: None, which means the
- default bias parameter property is used. See usage for details in :ref:`api_base_ParamAttr` .
+ default bias parameter property is used. See usage for details in :ref:`api_paddle_ParamAttr` .
param_attr (ParamAttr): To specify the weight parameter property. Default: None, which means the
- default weight parameter property is used. See usage for details in :ref:`api_base_ParamAttr` .
+ default weight parameter property is used. See usage for details in :ref:`api_paddle_ParamAttr` .
act (str): Activation to be applied to the output of this layer, such as tanh, softmax,
sigmoid, relu. For more information, please refer to :ref:`api_guide_activations_en` . Default: None.
name (str, optional): The default value is None. Normally there is no need for user to set this property.
