
Mapping documentation No. 26 #5799

Merged — 6 commits merged into PaddlePaddle:develop on Apr 21, 2023

Conversation

zeyuxionghust (Contributor)

Completed the documentation mapping for group No. 26; 9 new md files are submitted in this PR.
torch.sparse.sum was not added because the corresponding functionality is missing in Paddle.

paddle-bot bot commented Apr 16, 2023

Thank you for contributing to the PaddlePaddle documentation. The docs preview is being built; it will be available once the Docs-New job finishes. Preview link: http://preview-pr-5799.paddle-docs-preview.paddlepaddle.org.cn/documentation/docs/zh/api/index_cn.html
For more about the preview tool, see the PaddlePaddle docs preview tool.

@@ -0,0 +1,21 @@
## [ Only paddle has more parameters ] torch.fft.fftshift

Collaborator:

Here we can ignore the name parameter, so the category can simply be written as "only the parameter names differ", and the parameter mapping section can omit name as well.
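For context, a minimal sketch of why only the parameter name differs here (not part of the PR; the tensors are arbitrary examples):

```Python
import torch
import paddle

t = torch.arange(6.)
torch.fft.fftshift(t, dim=0)        # PyTorch names the axis argument `dim`

p = paddle.arange(6, dtype='float32')
paddle.fft.fftshift(p, axes=0)      # Paddle names the same argument `axes`
```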

```Python
paddle.fft.fftshift(x, axes=None, name=None)
```

Paddle supports more parameters than PyTorch, as follows:

Collaborator:

This sentence should also be revised to match the new category.

| ------------- | ------------ | ------------------------------------------------------ |
| input         | x            | Input Tensor; only the parameter name differs.          |
| dim           | axes         | Axes along which to shift; only the parameter name differs. |
| -             | name         | Name prefix for the layer; PyTorch has no such parameter, keep Paddle's default. |

Collaborator:

The name parameter can simply be removed.

```Python
paddle.nn.initializer.calculate_gain(nonlinearity, param=None)
```

The parameters and usage of the two APIs are exactly the same.

Collaborator:

Even when the APIs are fully consistent, please also write the parameter mapping section.

```Python
paddle.seed(seed)
```

The parameters and usage of the two APIs are exactly the same.

Collaborator:

Even when the APIs are fully consistent, please also write the parameter mapping section.
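For a fully consistent API such as this one, the requested mapping section would presumably be just a one-row table along these lines (a sketch, not the exact text added in the PR):

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| seed    | seed         | The random seed to set; the parameter is identical in both. |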


### Parameter mapping

PyTorch | PaddlePaddle | Notes

Collaborator:

The table format here doesn't look right; the rows are missing the leading |.
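For reference, the shape such a Markdown table is expected to take, with leading and trailing pipes on every row (the cell text below is only illustrative):

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| input   | x            | Input Tensor; only the parameter name differs. |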


### Parameter mapping

PyTorch | PaddlePaddle | Notes

Collaborator:

Please adjust the table format here as well.

--------| -------------| --------------------------------------------------------------------------------------
input   | x            | Input Tensor; only the parameter name differs.
dim     | axis         | The second input Tensor; only the parameter name differs.
dtype   | -            | Specifies the data type; PaddlePaddle has no such parameter.

Collaborator:

A case like this needs a conversion example: specify the type explicitly via the astype() function.
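A rough sketch of the kind of conversion example being suggested; torch.some_op / paddle.some_op below are placeholders rather than the API under review, and astype() is the cast method available on Paddle dense Tensors:

```Python
# PyTorch usage: dtype is passed to the operator directly
y = torch.some_op(x, dim=-1, dtype=torch.float32)

# Paddle usage: the operator has no dtype parameter, so cast explicitly with astype()
y = paddle.some_op(x.astype('float32'), axis=-1)
```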

zeyuxionghust (Contributor, author):

Could you give me a demo?

zeyuxionghust (Contributor, author):

Thanks for pointing this out.
A sparse Tensor may not support astype() directly, so paddle.sparse.cast is used for the conversion instead.
Please check it again when you have time.

|--------| -------------| --------------------------------------------------------------------------------------|
|input   | x            | The input sparse Tensor; only the parameter name differs.|
|dim     | axis         | The axis along which softmax is computed for the input SparseTensor; Paddle's default is -1. Only the parameter name differs.|
|dtype   | -            | Specifies the data type (optional; the PyTorch default is None); PaddlePaddle has no such parameter.|

Collaborator:

The note could add a sentence saying that a conversion is required.

### Conversion example
#### dtype: specifies the data type
```Python
# torch
```

Collaborator:

Please write the comments as "PyTorch usage" / "Paddle usage" so the style stays consistent.


# paddle: cast the dtype of values to float32
x = paddle.sparse.cast(x, index_dtype=None, value_dtype='float32')
paddle.sparse.nn.functional.softmax(x, -1)

Collaborator:

I think it would be better to compute the softmax first and then change the type, because the user may not want x's type to be modified:

y = paddle.sparse.nn.functional.softmax(x)
y = paddle.sparse.cast(y, value_dtype='float32')

zeyuxionghust (Contributor, author):

But this is how the PyTorch API is written: the cast happens before the operation, and many dtype parameters convert first:
"dtype – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None"

zeyuxionghust (Contributor, author):

See the dtype parameter of torch.sparse.softmax and torch.nn.functional.softmax:
torch.sparse.softmax
torch.nn.functional.softmax

zeyuxionghust (Contributor, author):

> because the user may not want x's type to be modified

Understood. I have changed it to y = paddle.sparse.cast(x, value_dtype='float32') so that x's dtype is no longer modified.
The cast is still done before the softmax, though, which matches the flow in the API documentation and source code.

# torch
torch.sparse.softmax(x, -1, dtype=torch.float32)

# paddle: cast the dtype of values to float32

Collaborator:

The comment can just say "Paddle usage" here.
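Putting the thread together, the final Paddle-side example presumably ends up roughly as follows (a sketch based on the author's description above, not the exact file content):

```Python
# PyTorch usage
torch.sparse.softmax(x, -1, dtype=torch.float32)

# Paddle usage: cast the values dtype first, then compute softmax
y = paddle.sparse.cast(x, value_dtype='float32')
y = paddle.sparse.nn.functional.softmax(y, -1)
```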

@Skylark-hjyp (Collaborator) left a comment:

LGTM

@zhwesky2010 (Collaborator):

@Tomoko-hjf For APIs whose functionality is missing in Paddle, follow the template requirements: just mark them in the outermost mapping table; there is no need to write an api_difference document for them.

@zhwesky2010 zhwesky2010 merged commit d60a9ce into PaddlePaddle:develop Apr 21, 2023
@Skylark-hjyp (Collaborator):

> @Tomoko-hjf For APIs whose functionality is missing in Paddle, follow the template requirements: just mark them in the outermost mapping table; there is no need to write an api_difference document for them.

OK.

@luotao1 (Collaborator) commented Apr 25, 2023:

hi, @zeyuxionghust

  • Thank you very much for your contribution to the PaddlePaddle framework. We run a PFCC organization that contributes to the framework on an ongoing basis through regular technical sharing and developer-led tasks; see the notes on the https://github.com/luotao1 profile page for details.
  • If you are interested in PFCC, please send an email to ext_paddle_oss@baidu.com and we will invite you to join.
