update dygraph auto_parallel en API docs. #59557
Conversation
ReduceType.kRedSum
ReduceType.kRedMax
ReduceType.kRedMin
ReduceType.kRedProd
ReduceType.kRedAvg
ReduceType.kRedAny
ReduceType.kRedAll
Suggested change:

- ReduceType.kRedSum
- ReduceType.kRedMax
- ReduceType.kRedMin
- ReduceType.kRedProd
- ReduceType.kRedAvg
- ReduceType.kRedAny
- ReduceType.kRedAll
DONE
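For context, a small sketch of how one of these reduce types would be passed to a Partial placement. The choice of kRedAvg here is purely illustrative; kRedSum is the default shown in the Examples below, and not every reduce type is necessarily supported by every op.

>>> import paddle
>>> import paddle.distributed as dist
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # mark the tensor as a partial value that is combined across ranks by averaging
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedAvg)])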
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])
The Examples: section, the .. code-block:: python directive, and the code snippet itself each need to keep their own level of indentation (see the English API documentation template); otherwise the page will not render correctly.
Suggested change:

Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])
DONE
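For reference, a minimal sketch of how the indented Examples section might sit inside the class docstring once the suggestion is applied; the summary sentence is illustrative and not taken from the PR, only the Examples block below it comes from the review thread.

class Partial:
    """
    Illustrative summary: marks a tensor as a partial result on each rank
    that still needs to be reduced across the process mesh.

    Examples:
        .. code-block:: python

            >>> import paddle
            >>> import paddle.distributed as dist
            >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
            >>> a = paddle.ones([10, 20])
            >>> # doctest: +REQUIRES(env:DISTRIBUTED)
            >>> # distributed tensor
            >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial(dist.ReduceType.kRedSum)])
    """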
Examples:
.. code-block:: python

>>> import paddle.distributed as dist
>>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
>>> for p in placements:
>>>     if isinstance(p, dist.Placement):
>>>         if p.is_replicated():
>>>             print("replicate.")
>>>         elif p.is_shard():
>>>             print("shard.")
>>>         elif p.is_partial():
>>>             print("partial.")
Likewise, this needs indentation.
Suggested change:

Examples:
    .. code-block:: python

        >>> import paddle.distributed as dist
        >>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
        >>> for p in placements:
        >>>     if isinstance(p, dist.Placement):
        >>>         if p.is_replicated():
        >>>             print("replicate.")
        >>>         elif p.is_shard():
        >>>             print("shard.")
        >>>         elif p.is_partial():
        >>>             print("partial.")
DONE
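As a rough check of what this snippet does, here is the same loop written as a self-contained doctest with its expected output. This assumes the placement objects can be constructed without a distributed launch, since they are plain descriptors; the printed lines then follow directly from the order of the placements list.

>>> import paddle.distributed as dist
>>> placements = [dist.Replicate(), dist.Shard(0), dist.Partial()]
>>> for p in placements:
...     if isinstance(p, dist.Placement):
...         if p.is_replicated():
...             print("replicate.")
...         elif p.is_shard():
...             print("shard.")
...         elif p.is_partial():
...             print("partial.")
replicate.
shard.
partial.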
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
>>> a = paddle.to_tensor([[1,2,3],[5,6,7]])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])
Suggested change:

Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([[2, 4, 5], [0, 1, 3]], dim_names=['x', 'y'])
        >>> a = paddle.to_tensor([[1,2,3],[5,6,7]])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Shard(0), dist.Shard(1)])
DONE
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Replicate()])
Suggested change:

Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Replicate()])
DONE
Examples:
.. code-block:: python

>>> import paddle
>>> import paddle.distributed as dist
>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> a = paddle.ones([10, 20])
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])
Suggested change:

Examples:
    .. code-block:: python

        >>> import paddle
        >>> import paddle.distributed as dist
        >>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
        >>> a = paddle.ones([10, 20])
        >>> # doctest: +REQUIRES(env:DISTRIBUTED)
        >>> # distributed tensor
        >>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])
DONE
Force-pushed from a49397a to 52c30de (Compare).
LGTM. Please also provide the matching Chinese documentation.
LGTM
LGTM
PR types
Others
PR changes
Docs
Description
Update the dygraph auto_parallel English API docs.
Pcard-73145