
[Paddle Inference] Predictor support paddle::Tensor #50445

Merged: 9 commits, Apr 11, 2023

Conversation

@yuanlehome (Contributor) commented Feb 13, 2023

PR types

New features

PR changes

APIs

Describe

  1. Add a new Run interface to Predictor that supports paddle::Tensor as input and output (a usage sketch follows below);
  2. Adapt the jit layer predictor engine to the new Run interface and remove the old Tensor conversion logic;
  3. Some features in complex.h, which tensor.h depends on, require -std=c++14, so the minimum C++ standard for building the inference lib is raised to C++14;
  4. Refine some code and add unit tests.
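
Below is a minimal usage sketch (not part of the PR text) of the new Run overload. It assumes the overload is exposed on paddle_infer::Predictor, as the PR title suggests; the header name and the way the input tensors are prepared are illustrative assumptions.

#include <vector>
#include "paddle_inference_api.h"  // assumed public header name

// Sketch only: the inputs are paddle::Tensor objects prepared by the caller.
bool RunWithPaddleTensor(paddle_infer::Predictor* predictor,
                         const std::vector<paddle::Tensor>& inputs,
                         std::vector<paddle::Tensor>* outputs) {
  // New overload from this PR: paddle::Tensor in, paddle::Tensor out,
  // without going through the old PaddleTensor/ZeroCopyTensor conversion path.
  return predictor->Run(inputs, outputs);
}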

Docs ref to [link].

TODO:

  • Add the C/Go interface implementations
  • Update the C/Go API docs

NOTES:

@paddle-bot (bot) commented Feb 13, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@yuanlehome yuanlehome marked this pull request as draft February 13, 2023 07:31
@yuanlehome yuanlehome marked this pull request as ready for review February 17, 2023 11:19
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch 2 times, most recently from b511447 to b3796f3 on March 1, 2023 12:30
@yuanlehome yuanlehome marked this pull request as draft March 2, 2023 01:38
@yuanlehome yuanlehome changed the title [Paddle Inference] support phi tensor [Paddle Inference] support paddle::Tensor Mar 2, 2023
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch 2 times, most recently from fd31565 to c28e71b on March 9, 2023 06:09
@yuanlehome yuanlehome marked this pull request as ready for review March 9, 2023 06:10
@yuanlehome yuanlehome changed the title [Paddle Inference] support paddle::Tensor [Paddle Inference] Predictor support paddle::Tensor Mar 9, 2023
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch 8 times, most recently from 8e0b229 to a293262 on March 15, 2023 14:16
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch 4 times, most recently from abcfadd to 335898b on March 21, 2023 05:41
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch from 335898b to d6997ff on March 30, 2023 07:43
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch from 16a7db5 to 0736089 on March 30, 2023 11:22
@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch from 0736089 to 6d06b4e on March 30, 2023 12:28
@zhwesky2010 (Contributor) previously approved these changes Mar 31, 2023

LGTM for ut rename

@yuanlehome yuanlehome force-pushed the phi_tensor_for_inference branch from 2151a34 to 2296742 on March 31, 2023 07:35
LOG(ERROR) << "fail to get fetches";
return false;
}
VLOG(3) << "predict cost: " << timer.toc() << "ms";
Contributor commented:

In the async case, won't the timing statistics be wrong without adding a sync?

Contributor Author (@yuanlehome) replied:

Right, a sync is needed; I plan to put it inside getfetch.

Contributor Author (@yuanlehome) replied on Apr 10, 2023:

After discussion, we decided not to add a sync here. paddle::Tensor provides the copy_to (C++) and to_tensor interfaces to do this kind of work.
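
A minimal sketch of the point above, assuming an asynchronous device stream and the copy_to interface the author mentions; the exact copy_to signature, the paddle::CPUPlace spelling, and the helper name are assumptions, not part of this PR.

#include <chrono>
#include <vector>
#include "paddle_inference_api.h"  // assumed public header name

// Hypothetical helper: the caller, not Run(), decides when to synchronize,
// here by copying an output back to the host before stopping the timer.
double TimedRunMs(paddle_infer::Predictor* predictor,
                  const std::vector<paddle::Tensor>& inputs) {
  std::vector<paddle::Tensor> outputs;
  auto start = std::chrono::steady_clock::now();
  predictor->Run(inputs, &outputs);  // may return before the device kernels finish
  // The blocking host copy waits for the work producing this output,
  // so the measured time includes the actual device execution.
  auto host_out = outputs.front().copy_to(paddle::CPUPlace(), /*blocking=*/true);
  (void)host_out;  // unused; the copy exists only to synchronize
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(end - start).count();
}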

@yuanlehome yuanlehome closed this Apr 10, 2023
@yuanlehome yuanlehome reopened this Apr 10, 2023
@@ -1037,6 +1048,73 @@ bool AnalysisPredictor::Run(const std::vector<PaddleTensor> &inputs,
return true;
}

bool AnalysisPredictor::Run(const std::vector<paddle::Tensor> &inputs,
                            std::vector<paddle::Tensor> *outputs) {
Contributor commented:

Can the implementation duplicated from the Run interface above be factored out and reused?

Contributor Author (@yuanlehome) replied:

I considered extracting the common code, but the extracted code would not have a clear, self-contained purpose.

@@ -83,7 +83,7 @@ else()
if(WITH_MKL)
set(FLAG_OPENMP "-fopenmp")
endif()
-  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 ${FLAG_OPENMP}")
+  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 ${FLAG_OPENMP}")
Contributor commented:

Does this upgrade affect anything else?

Contributor Author (@yuanlehome) replied:

It affects the CMakeLists settings in paddle-inference-demo; a PR has already been submitted to change those.

@jiweibo (Contributor) left a comment:

LGTM

@leiqing1 left a comment:

Mainly reviewed the corresponding API docs. The Go and C docs are still missing at the moment.
