
[IR] IR attribute printer and support mutable attribute #54369

Merged
merged 60 commits into PaddlePaddle:develop from ir_attribute_printer
Jun 8, 2023

Conversation

kangguangli (Contributor) commented Jun 5, 2023

PR types

New features

PR changes

Others

Description

The current printer output looks like this:

(%902) = "pd.scale" (%900, %901){bias_after_scale:1,bias:0} : (tensor<f32>, tensor<1xf32>) -> tensor<f32>
(%903) = "pd.full" (){place:Place(cpu),dtype:float32,value:-1,shape:IntArray<1 >} : () -> tensor<1xf32>
(%904) = "pd.scale" (%902, %903){bias_after_scale:1,bias:0} : (tensor<f32>, tensor<1xf32>) -> tensor<f32>
(%905) = "pd.full" (){place:Place(cpu),dtype:int16,value:1,shape:IntArray<>} : () -> tensor<f32>
(%906) = "pd.mean_grad" (%879, %905){reduce_all:1,keepdim:0,axis:IntArray<>} : (tensor<-1x1xf32>, tensor<f32>) -> tensor<-1x1xf32>
(%907) = "pd.cross_entropy_with_softmax_grad" (%876, %878, %906){axis:-1,ignore_index:-100,numeric_stable_mode:1,use_softmax:1,soft_label:0} : (tensor<-1x1xi64>, tensor<-1x1000xf32>, tensor<-1x1xf32>) -> tensor<-1x1000xf32>
(%908, %909) = "pd.add_grad" (%873, %425, %907){axis:-1} : (tensor<-1x1000xf32>, tensor<1000xf32>, tensor<-1x1000xf32>) -> tensor<-1x1000xf32>, tensor<1000xf32>
(%910, %911) = "pd.matmul_grad" (%871, %427, %908){transpose_y:0,transpose_x:0} : (tensor<-1x2048xf32>, tensor<2048x1000xf32>, tensor<-1x1000xf32>) -> tensor<-1x2048xf32>, tensor<2048x1000xf32>
(%912) = "pd.flatten_grad" (%872, %910){} : (tensor<0x-1x2048x1x1xf32>, tensor<-1x2048xf32>) -> tensor<-1x2048x1x1xf32>
(%913) = "pd.pool2d_grad" (%868, %870, %912){padding_algorithm:EXPLICIT,adaptive:1,global_pooling:0,data_format:NCHW,exclusive:1,ceil_mode:0,paddings:array<0, 0>,pooling_type:avg,strides:array<1, 1>,kernel_size:IntArray<1 1 >} : (tensor<-1x2048x7x7xf32>, tensor<-1x2048x1x1xf32>, tensor<-1x2048x1x1xf32>) -> tensor<-1x2048x7x7xf32>
(%914) = "pd.relu_grad" (%868, %913){} : (tensor<-1x2048x7x7xf32>, tensor<-1x2048x7x7xf32>) -> tensor<-1x2048x7x7xf32>

Others

Pcard-67164

kangguangli and others added 30 commits May 23, 2023 02:51
paddle-bot (bot) commented Jun 5, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@kangguangli kangguangli force-pushed the ir_attribute_printer branch from ecfa280 to ed21d27 Compare June 7, 2023 06:35
@@ -130,8 +132,12 @@ class Dialect {
     return *interface;
   }

-  virtual void PrintType(ir::Type type, std::ostream &os) {
+  virtual void PrintType(ir::Type type, std::ostream &os) const {
     throw std::logic_error("dialect has no registered type printing hook");
A reviewer (Contributor) commented:
Since this is already inside the ir namespace, there's no need to spell out the ir:: prefix explicitly, right?

kangguangli (Contributor, Author) replied:

OK, I'll fix this here.
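
As an aside, the hook touched by this diff is meant to be overridden by concrete dialects. A minimal sketch of an override, assuming a hypothetical MyDialect and MyTensorType (constructor boilerplate omitted, and assuming ir::Type supports dyn_cast the way ir::Attribute does in this PR); the const qualifier matches the signature change above:

// Hypothetical dialect overriding the (now const) type-printing hook.
// MyDialect and MyTensorType are illustrative names, not Paddle APIs.
class MyDialect : public ir::Dialect {
 public:
  void PrintType(ir::Type type, std::ostream &os) const override {
    if (auto t = type.dyn_cast<MyTensorType>()) {
      os << "my.tensor";  // emit the dialect-specific textual form
    } else {
      // Unknown types fall back to the base hook, which throws.
      ir::Dialect::PrintType(type, os);
    }
  }
};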


if (auto s = attr.dyn_cast<ir::StrAttribute>()) {
A reviewer (Contributor) commented:

Similarly, there's no need to write ir:: within the same namespace.
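
To make the convention in these two comments concrete: inside namespace ir, the qualified and unqualified spellings name the same entity, so the prefix is redundant. A trivial self-contained illustration:

#include <ostream>

namespace ir {
class Type {};
// Inside namespace ir both declarations refer to the same ir::Type,
// so the reviewers ask for the shorter, unqualified spelling.
void PrintType(Type type, std::ostream &os);              // preferred
void PrintTypeVerbose(ir::Type type, std::ostream &os);   // redundant ir::
}  // namespace ir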

@kangguangli kangguangli force-pushed the ir_attribute_printer branch from 4eaffd8 to d731403 Compare June 7, 2023 11:06
@kangguangli kangguangli merged commit 51ca74b into PaddlePaddle:develop Jun 8, 2023
@kangguangli kangguangli deleted the ir_attribute_printer branch June 8, 2023 06:25