[Inference] Save optimized model by pass #53696
Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
❌ The PR was not created using the PR template. You can refer to this Demo.
Force-pushed from 6d2aae8 to d0b9a50
LGTM
Force-pushed from 63c4a8e to 9d43ee1
paddle/fluid/inference/analysis/passes/save_optimized_model_pass.cc
paddle/fluid/inference/analysis/passes/save_optimized_model_pass.h
… support_inference_save_ir_params_xpu
LGTM
LGTM
LGTM
PR types: New features
PR changes: Others
Description
Documentation update link: PaddlePaddle/Paddle-Inference-Demo#446
Supports saving the model after it has been optimized by IR passes, so that it can later be loaded for inference normally.
Usage:
When running inference with the saved optimized model, IR optimization must be disabled so the IR passes are not applied a second time:
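The usage above can be sketched with the Paddle Inference Python API. This is a minimal illustration, not the PR's definitive interface: the model/parameter file paths are placeholders, and the option for enabling the save pass (shown commented out) is a hypothetical name not confirmed by this PR's text; only `switch_ir_optim` is a known `Config` method.

```python
from paddle.inference import Config, create_predictor

# Step 1 (one-time): run with IR optimization enabled and save the
# optimized model produced by the IR passes.
config = Config("model/inference.pdmodel", "model/inference.pdiparams")
config.switch_ir_optim(True)  # apply IR passes
# config.exp_enable_save_optimized_model(True)  # hypothetical save-pass switch
predictor = create_predictor(config)

# Step 2 (subsequent runs): load the already-optimized model with IR
# optimization disabled, so the passes are not executed again.
opt_config = Config("optimized/inference.pdmodel",
                    "optimized/inference.pdiparams")
opt_config.switch_ir_optim(False)  # avoid re-running IR optimization
opt_predictor = create_predictor(opt_config)
```

Disabling IR optimization in step 2 matters because the saved graph has already been rewritten by the passes; applying them again could fail or produce a different graph.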