[Docs][en] adjust code example format #44679

Merged
merged 8 commits on Aug 4, 2022
4 changes: 2 additions & 2 deletions python/paddle/distribution/transform.py
@@ -590,7 +590,7 @@ def _codomain(self):
class ExpTransform(Transform):
r"""Exponent transformation with mapping :math:`y = \exp(x)`.

Exapmles:
Examples:

.. code-block:: python
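            # A minimal, hedged sketch of how this transform might be used;
            # forward/inverse follow the Transform interface in this module.
            import paddle

            exp = paddle.distribution.ExpTransform()
            x = paddle.to_tensor([1., 2., 3.])
            print(exp.forward(x))               # exp(x)
            print(exp.inverse(exp.forward(x)))  # recovers x, up to float precision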

@@ -1169,7 +1169,7 @@ def _codomain(self):
class TanhTransform(Transform):
r"""Tanh transformation with mapping :math:`y = \tanh(x)`.

Examples
Examples:

.. code-block:: python
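            # A minimal, hedged sketch, mirroring the ExpTransform example above.
            import paddle

            tanh = paddle.distribution.TanhTransform()
            x = paddle.to_tensor([1., 2., 3.])
            print(tanh.forward(x))                # tanh(x)
            print(tanh.inverse(tanh.forward(x)))  # recovers x, up to float precision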

6 changes: 3 additions & 3 deletions python/paddle/fluid/dataloader/dataset.py
@@ -413,7 +413,7 @@ class Subset(Dataset):
Returns:
Dataset: A Dataset which is the subset of the original dataset.

Example code:
Examples:

.. code-block:: python
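            # A minimal, hedged sketch; range(1, 4) stands in for any Dataset.
            import paddle

            sub = paddle.io.Subset(dataset=range(1, 4), indices=[0, 2])
            for item in sub:
                print(item)  # 1, then 3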

@@ -452,10 +452,10 @@ def random_split(dataset, lengths, generator=None):
lengths (sequence): lengths of splits to be produced
generator (Generator, optional): Generator used for the random permutation. Default is None, in which case the DefaultGenerator seeded via manual_seed() is used.

Returns:
Returns:
Datasets: A list of subset Datasets, which are the non-overlapping subsets of the original Dataset.

Example code:
Examples:

.. code-block:: python
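            # A minimal, hedged sketch; lengths must sum to len(dataset) and
            # range(10) stands in for any Dataset.
            import paddle

            subsets = paddle.io.random_split(range(10), [3, 7])
            print([len(s) for s in subsets])  # [3, 7]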

5 changes: 3 additions & 2 deletions python/paddle/fluid/dygraph/base.py
@@ -485,8 +485,9 @@ def grad(outputs,
inside `inputs`, and the i-th returned Tensor is the sum of gradients of
`outputs` with respect to the i-th `inputs`.
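
A minimal sketch of this behavior (hedged; the scalar value is chosen for illustration):

    import paddle

    x = paddle.to_tensor(3.0, stop_gradient=False)
    y = x * x
    dx, = paddle.grad(outputs=[y], inputs=[x])
    print(dx)  # Tensor holding 6., i.e. dy/dx evaluated at x = 3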

Examples 1:
Examples:
.. code-block:: python
:name: code-example-1

import paddle

@@ -519,8 +520,8 @@ def test_dygraph_grad(create_graph):
print(test_dygraph_grad(create_graph=False)) # [2.]
print(test_dygraph_grad(create_graph=True)) # [4.]

Examples 2:
.. code-block:: python
:name: code-example-2

import paddle

20 changes: 20 additions & 0 deletions python/paddle/fluid/dygraph/layers.py
@@ -98,6 +98,26 @@ class Layer(object):

Returns:
None

Examples:
.. code-block:: python

import paddle
class MyLayer(paddle.nn.Layer):
def __init__(self):
super(MyLayer, self).__init__()
self._linear = paddle.nn.Linear(1, 1)
self._dropout = paddle.nn.Dropout(p=0.5)
def forward(self, input):
temp = self._linear(input)
temp = self._dropout(temp)
return temp
x = paddle.randn([10, 1], 'float32')
mylayer = MyLayer()
mylayer.eval() # set mylayer._dropout to eval mode
out = mylayer(x)
mylayer.train() # set mylayer._dropout to train mode
out = mylayer(x)
"""

def __init__(self, name_scope=None, dtype="float32"):
8 changes: 5 additions & 3 deletions python/paddle/fluid/executor.py
@@ -1187,8 +1187,9 @@ def run(self,
results are spliced together in dimension 0 for the same Tensor values
(Tensors in fetch_list) on different devices.
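
Before the full examples, a minimal sketch of a plain run call (hedged; the layer size and feed data are illustrative):

    import paddle
    import numpy

    paddle.enable_static()
    data = paddle.static.data(name='X', shape=[None, 1], dtype='float32')
    hidden = paddle.static.nn.fc(data, 2)
    exe = paddle.static.Executor(paddle.CPUPlace())
    exe.run(paddle.static.default_startup_program())
    res, = exe.run(feed={'X': numpy.random.random(size=(4, 1)).astype('float32')},
                   fetch_list=[hidden])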

Examples 1:
Examples:
.. code-block:: python
:name: code-example-1

import paddle
import numpy
@@ -1215,9 +1216,10 @@ def run(self,
print(array_val)
# [array([0.02153828], dtype=float32)]

Examples 2:
.. code-block:: python
:name: code-example-2

# required: gpu
import paddle
import numpy as np

@@ -1265,7 +1267,7 @@ def run(self,
print("The merged prediction shape: {}".format(
np.array(merged_prediction).shape))
print(merged_prediction)

# Out:
# The unmerged prediction shape: (2, 3, 2)
# [array([[-0.37620035, -0.19752218],
2 changes: 1 addition & 1 deletion python/paddle/fluid/initializer.py
@@ -1177,7 +1177,7 @@ def calculate_gain(nonlinearity, param=None):

Examples:
.. code-block:: python
:name: code-example1

import paddle
gain = paddle.nn.initializer.calculate_gain('tanh') # 5.0 / 3
gain = paddle.nn.initializer.calculate_gain('leaky_relu', param=1.0) # 1.0 = math.sqrt(2.0 / (1+param^2))
1 change: 0 additions & 1 deletion python/paddle/incubate/autotune.py
@@ -54,7 +54,6 @@ def set_config(config=None):

Examples:
.. code-block:: python
:name: auto-tuning

import paddle
import json
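            # A hedged sketch of one possible config; these keys are an assumption
            # based on this docstring and may differ across Paddle versions.
            config = {"kernel": {"enable": True, "tuning_range": [1, 10]}}
            paddle.incubate.autotune.set_config(config)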
1 change: 0 additions & 1 deletion python/paddle/nn/functional/activation.py
@@ -601,7 +601,6 @@ def rrelu(x, lower=1. / 8., upper=1. / 3., training=True, name=None):

Examples:
.. code-block:: python
:name: rrelu-example

import paddle
import paddle.nn.functional as F
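            # A minimal, hedged sketch using the defaults documented above;
            # the input values are illustrative only.
            x = paddle.to_tensor([[-2.0, 3.0], [-4.0, 5.0]])
            out = F.rrelu(x, lower=1. / 8., upper=1. / 3., training=True)
            # negative entries are scaled by a slope sampled from [lower, upper]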
26 changes: 12 additions & 14 deletions python/paddle/nn/functional/loss.py
@@ -2106,7 +2106,7 @@ def cross_entropy(input,

Return the average value of the previous results

.. math::
.. math::
\\loss=\sum_{j}loss_j/N

where, N is the number of samples and C is the number of categories.
@@ -2115,29 +2115,29 @@ def cross_entropy(input,

1. Hard labels (soft_label = False)

.. math::
.. math::
\\loss=\sum_{j}loss_j/\sum_{j}weight[label_j]

2. Soft labels (soft_label = True)

.. math::
.. math::
\\loss=\sum_{j}loss_j/\sum_{j}\left(\sum_{i}weight[label_i]\right)
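
    For example, under hard labels, if two samples have weighted losses :math:`loss_j` of 0.3 and 1.4 and label weights of 0.3 and 0.7, the reduced loss is :math:`(0.3 + 1.4) / (0.3 + 0.7) = 1.7`.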


Parameters:

- **input** (Tensor)

Input tensor, the data type is float32, float64. Shape is
:math:`[N_1, N_2, ..., N_k, C]`, where C is number of classes , ``k >= 1`` .
:math:`[N_1, N_2, ..., N_k, C]`, where C is number of classes , ``k >= 1`` .

Note:

1. when use_softmax=True, it expects unscaled logits. This operator should not be used with the
output of softmax operator, which will produce incorrect results.

2. when use_softmax=False, it expects the output of softmax operator.

- **label** (Tensor)

1. If soft_label=False, the shape is
@@ -2205,10 +2205,11 @@ def cross_entropy(input,
2. if soft_label = True, the dimension of return value is :math:`[N_1, N_2, ..., N_k, 1]` .


Example1(hard labels):
Examples:

.. code-block:: python


# hard labels
import paddle
paddle.seed(99999)
N=100
@@ -2225,11 +2226,9 @@ def cross_entropy(input,
label)
print(dy_ret.numpy()) #[5.41993642]


Example2(soft labels):

.. code-block:: python


# soft labels
import paddle
paddle.seed(99999)
axis = -1
@@ -2896,7 +2895,6 @@ def cosine_embedding_loss(input1,

Examples:
.. code-block:: python
:name: code-example1

import paddle
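            # A minimal, hedged sketch; the tensor values are illustrative only.
            input1 = paddle.to_tensor([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], dtype='float32')
            input2 = paddle.to_tensor([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], dtype='float32')
            label = paddle.to_tensor([1, -1], dtype='int64')
            out = paddle.nn.functional.cosine_embedding_loss(input1, input2, label, margin=0.5, reduction='mean')
            print(out)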

35 changes: 17 additions & 18 deletions python/paddle/nn/functional/pooling.py
@@ -1286,26 +1286,25 @@ def adaptive_avg_pool1d(x, output_size, name=None):
Tensor: The result of 1D adaptive average pooling. Its data type is same as input.
Examples:
.. code-block:: python
:name: adaptive_avg_pool1d-example

# average adaptive pool1d
# suppose input data in shape of [N, C, L], `output_size` is m or [m],
# output shape is [N, C, m], adaptive pool divide L dimension
# of input data into m grids averagely and performs poolings in each
# grid to get output.
# adaptive max pool performs calculations as follow:
#
# for i in range(m):
# lstart = floor(i * L / m)
# lend = ceil((i + 1) * L / m)
# output[:, :, i] = sum(input[:, :, lstart: lend])/(lstart - lend)
#
import paddle
import paddle.nn.functional as F
# average adaptive pool1d
# suppose input data is in shape of [N, C, L], `output_size` is m or [m],
# output shape is [N, C, m]; adaptive pooling divides the L dimension
# of the input data evenly into m grids and performs pooling in each
# grid to get the output.
# adaptive avg pool performs calculations as follows:
#
# for i in range(m):
#     lstart = floor(i * L / m)
#     lend = ceil((i + 1) * L / m)
#     output[:, :, i] = sum(input[:, :, lstart: lend]) / (lend - lstart)
#
import paddle
import paddle.nn.functional as F

data = paddle.uniform([1, 3, 32])
pool_out = F.adaptive_avg_pool1d(data, output_size=16)
# pool_out shape: [1, 3, 16])
data = paddle.uniform([1, 3, 32])
pool_out = F.adaptive_avg_pool1d(data, output_size=16)
# pool_out shape: [1, 3, 16]
"""
pool_type = 'avg'
if not in_dynamic_mode():
2 changes: 0 additions & 2 deletions python/paddle/nn/functional/vision.py
@@ -366,7 +366,6 @@ def pixel_unshuffle(x, downscale_factor, data_format="NCHW", name=None):

Examples:
.. code-block:: python
:name: pixel_unshuffle-example

import paddle
import paddle.nn.functional as F
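            # A minimal, hedged sketch: downscale_factor=3 folds each 3x3 spatial
            # block into the channel dimension.
            x = paddle.randn([2, 1, 12, 12])
            out = F.pixel_unshuffle(x, downscale_factor=3)
            print(out.shape)  # [2, 9, 4, 4]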
@@ -423,7 +422,6 @@ def channel_shuffle(x, groups, data_format="NCHW", name=None):

Examples:
.. code-block:: python
:name: channel_shuffle-example

import paddle
import paddle.nn.functional as F
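            # A minimal, hedged sketch: the shape is preserved while channels are
            # regrouped across `groups`.
            x = paddle.randn([1, 8, 4, 4])
            out = F.channel_shuffle(x, groups=2)
            print(out.shape)  # [1, 8, 4, 4]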
2 changes: 1 addition & 1 deletion python/paddle/nn/initializer/constant.py
@@ -26,7 +26,7 @@ class Constant(ConstantInitializer):

Examples:
.. code-block:: python
:name: code-example1

import paddle
import paddle.nn as nn
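            # A minimal, hedged sketch: set every weight of a Linear layer to 2.0.
            weight_attr = paddle.ParamAttr(initializer=nn.initializer.Constant(value=2.0))
            linear = nn.Linear(2, 4, weight_attr=weight_attr)
            print(linear.weight)  # all entries are 2.0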

1 change: 0 additions & 1 deletion python/paddle/nn/initializer/normal.py
@@ -72,7 +72,6 @@ class TruncatedNormal(TruncatedNormalInitializer):

Examples:
.. code-block:: python
:name: initializer_TruncatedNormal-example

import paddle
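            # A minimal, hedged sketch: draw Linear weights from a truncated normal.
            attr = paddle.ParamAttr(initializer=paddle.nn.initializer.TruncatedNormal(mean=0.0, std=2.0))
            linear = paddle.nn.Linear(2, 4, weight_attr=attr)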

1 change: 0 additions & 1 deletion python/paddle/nn/initializer/uniform.py
@@ -30,7 +30,6 @@ class Uniform(UniformInitializer):

Examples:
.. code-block:: python
:name: initializer_Uniform-example

import paddle
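            # A minimal, hedged sketch: draw Linear weights uniformly from [-0.5, 0.5].
            attr = paddle.ParamAttr(initializer=paddle.nn.initializer.Uniform(low=-0.5, high=0.5))
            linear = paddle.nn.Linear(2, 4, weight_attr=attr)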

2 changes: 0 additions & 2 deletions python/paddle/nn/initializer/xavier.py
@@ -41,7 +41,6 @@ class XavierNormal(XavierInitializer):

Examples:
.. code-block:: python
:name: initializer_XavierNormal-example

import paddle
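            # A minimal, hedged sketch: Xavier (Glorot) normal initialization.
            attr = paddle.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())
            linear = paddle.nn.Linear(2, 4, weight_attr=attr)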

@@ -97,7 +96,6 @@ class XavierUniform(XavierInitializer):

Examples:
.. code-block:: python
:name: initializer_XavierUniform-example

import paddle
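            # A minimal, hedged sketch: Xavier (Glorot) uniform initialization.
            attr = paddle.ParamAttr(initializer=paddle.nn.initializer.XavierUniform())
            linear = paddle.nn.Linear(2, 4, weight_attr=attr)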

1 change: 0 additions & 1 deletion python/paddle/nn/layer/activation.py
@@ -486,7 +486,6 @@ class RReLU(Layer):

Examples:
.. code-block:: python
:name: RReLU-example

import paddle
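            # A minimal, hedged sketch of the layer form; input values are illustrative.
            x = paddle.to_tensor([[-1.0, 2.0], [-3.0, 4.0]])
            rrelu = paddle.nn.RReLU(lower=1. / 8., upper=1. / 3.)
            out = rrelu(x)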

16 changes: 7 additions & 9 deletions python/paddle/nn/layer/loss.py
@@ -295,15 +295,15 @@ class CrossEntropyLoss(Layer):
- **input** (Tensor)

Input tensor, the data type is float32, float64. Shape is
:math:`[N_1, N_2, ..., N_k, C]`, where C is number of classes , ``k >= 1`` .
:math:`[N_1, N_2, ..., N_k, C]`, where C is number of classes , ``k >= 1`` .

Note:

1. when use_softmax=True, it expects unscaled logits. This operator should not be used with the
output of softmax operator, which will produce incorrect results.

2. when use_softmax=False, it expects the output of softmax operator.


- **label** (Tensor)

@@ -313,7 +313,7 @@ class CrossEntropyLoss(Layer):

2. If soft_label=True, the shape and data type should be same with ``input`` ,
and the sum of the labels for each sample should be 1.

- **output** (Tensor)

Return the softmax cross_entropy loss of ``input`` and ``label``.
@@ -328,10 +328,11 @@ class CrossEntropyLoss(Layer):

2. if soft_label = True, the dimension of return value is :math:`[N_1, N_2, ..., N_k, 1]` .

Example1(hard labels):
Examples:

.. code-block:: python

# hard labels
import paddle
paddle.seed(99999)
N=100
@@ -348,11 +349,9 @@ class CrossEntropyLoss(Layer):
label)
print(dy_ret.numpy()) #[5.41993642]


Example2(soft labels):

.. code-block:: python


# soft labels
import paddle
paddle.seed(99999)
axis = -1
@@ -1435,7 +1434,6 @@ class CosineEmbeddingLoss(Layer):

Examples:
.. code-block:: python
:name: code-example1

import paddle
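            # A minimal, hedged sketch of the layer form; values are illustrative only.
            input1 = paddle.to_tensor([[1.6, 1.2, -0.5], [3.2, 2.6, -5.8]], dtype='float32')
            input2 = paddle.to_tensor([[0.5, 0.5, -1.8], [2.3, -1.4, 1.1]], dtype='float32')
            label = paddle.to_tensor([1, -1], dtype='int64')
            loss_fn = paddle.nn.CosineEmbeddingLoss(margin=0.5, reduction='mean')
            out = loss_fn(input1, input2, label)
            print(out)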
