
Conversion rules No.358/361 #256

Merged: 2 commits merged into PaddlePaddle:master on Sep 4, 2023
Conversation

co63oc (Contributor) commented Aug 26, 2023

PR Docs

#112

358 torch.linalg.lu_factor
361 torch.linalg.lu_factor_ex: the shape of the returned info differs, so the comparison is changed to info = info.item(); the check_errors parameter is mapped by adding an assert check (see the sketch below).

Mapping docs: PaddlePaddle/docs#6139
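A minimal sketch of the intended transcription for No.361. The Paddle target (paddle.linalg.lu with get_infos=True) and the variable names are assumptions for illustration; the exact converter output may differ:

import paddle

A = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype="float64")
# PyTorch original:   LU, pivots, info = torch.linalg.lu_factor_ex(A, pivot=True)
# Converted (sketch): map to paddle.linalg.lu with get_infos=True and flatten info to a scalar
LU, pivots, info = paddle.linalg.lu(A, pivot=True, get_infos=True)
info = info.item()  # paddle returns info with shape [1]; torch's value is scalar-like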

PR APIs

paddle-bot (bot) commented Aug 26, 2023

Thanks for your contribution!

paddle-bot added the "contributor" (External developers) label on Aug 26, 2023
luotao1 added the "HappyOpenSource" (Happy Open Source activity issues and PRs) label on Aug 28, 2023
def generate_code(self, kwargs):
    # pop arguments that have no direct Paddle counterpart
    out_v = kwargs.pop("out") if "out" in kwargs else None
    check_errors_v = (
        kwargs.pop("check_errors") if "check_errors" in kwargs else None
    )
zhwesky2010 (Collaborator) commented Aug 28, 2023

I think we can skip converting this kind of error-checking feature; it has no effect on the output, and dropping it saves one line in the converted code.

co63oc (Contributor, Author)

check_errors has been removed.

import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=torch.float64)
out = (torch.tensor([], dtype=torch.float64), torch.tensor([], dtype=torch.int), torch.tensor([], dtype=torch.int))
LU, pivots, info = torch.linalg.lu_factor_ex(A=x, pivot=True, check_errors=True, out=out)
info = info.item()
zhwesky2010 (Collaborator)

Is there some difference in this info output? For example, is paddle's a Tensor while torch's is a Python scalar?

co63oc (Contributor, Author)

The shapes differ: paddle's info has shape [1], while torch's is an integer value.

zhwesky2010 (Collaborator)

> The shapes differ: paddle's info has shape [1], while torch's is an integer value.

Then this spot also needs to be transcribed: in the converted code, change the third output to info.item().
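For illustration, one way the converted code could express that (the temporary variable and the Paddle call are assumptions for this sketch, not the merged converter output):

# torch original:     LU, pivots, info = torch.linalg.lu_factor_ex(x)
# converted (sketch): unpack through a temporary so only the third output gets .item()
tmp = paddle.linalg.lu(x, get_infos=True)
LU, pivots, info = tmp[0], tmp[1], tmp[2].item()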

@@ -4061,6 +4061,60 @@ def generate_code(self, kwargs):
return code


class LinalgLufactorMatcher(BaseMatcher):
zhwesky2010 (Collaborator)

This should be able to reuse TupleAssignMatcher.

co63oc (Contributor, Author)

Updated.

return code


class LinalgLufactorexMatcher(BaseMatcher):
zhwesky2010 (Collaborator) commented Aug 29, 2023

This could be turned into a generic TripleAssignMatcher, configured with:

"check_errors": ""

That keeps the code maximally reusable and makes future maintenance and updates easier.
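For reference, a sketch of what such a mapping entry might look like, written here as a Python dict (the field names follow PaConvert's api_mapping.json conventions as I understand them and are assumptions; the actual merged entry may differ):

# Hypothetical api_mapping.json entry for torch.linalg.lu_factor_ex
lu_factor_ex_entry = {
    "torch.linalg.lu_factor_ex": {
        "Matcher": "TripleAssignMatcher",
        "paddle_api": "paddle.linalg.lu",
        "args_list": ["A", "pivot", "check_errors", "out"],
        "kwargs_change": {
            "A": "x",
            "check_errors": "",  # empty value: drop this argument during conversion
        },
    }
}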

co63oc (Contributor, Author)

Updated.

zhwesky2010 (Collaborator)

The merge conflicts still need to be resolved.

co63oc (Contributor, Author) commented Aug 29, 2023

> The merge conflicts still need to be resolved.

Resolved.


zhwesky2010 (Collaborator)

@co63oc I will merge this first; please change the info.item() part in the next PR. You can add a new Matcher for it and keep the current TripleAssignMatcher.

zhwesky2010 merged commit 6d13662 into PaddlePaddle:master on Sep 4, 2023
co63oc mentioned this pull request on Sep 4, 2023
co63oc (Contributor, Author) commented Sep 4, 2023

> @co63oc I will merge this first; please change the info.item() part in the next PR. You can add a new Matcher for it and keep the current TripleAssignMatcher.

Changed in PR #268.

co63oc deleted the api358 branch on September 7, 2023
Labels: contributor (External developers), HappyOpenSource (Happy Open Source activity issues and PRs)