MKLDNN layout: Support for pool operator #11101

Merged

Conversation

mozga-intel
Contributor

@mozga-intel mozga-intel commented May 31, 2018

Waiting for support of the MKLDNN layout.

Please have a look at this pool operator with MKLDNN layout support and assess whether we can keep this design of the code.

This code uses the layout implementation that is available in the previous pull request. Therefore some of the functions used in this code are not available in this pull request, but are available there.

This version of the code can be merged into the main branch once the layout pull request is accepted.

The idea of splitting up the long code was proposed by @luotao1.

This pull request is related to #11040.

@mozga-intel mozga-intel force-pushed the mozga-intel/Pool_mkldnn_layout branch from c6bf91f to 36031cb on June 10, 2018 21:10
@luotao1 luotao1 requested a review from tensor-tang June 11, 2018 07:29
@@ -94,16 +103,17 @@ class PoolMKLDNNOpKernel : public paddle::framework::OpKernel<T> {
    auto pool_p =
        std::static_pointer_cast<pooling_forward>(dev_ctx.GetBlob(key_pool_p));
    if (pool_p == nullptr) {
      // TODO(pzelazko-intel): support more formats
      auto src_md = platform::MKLDNNMemDesc(
          src_tz, platform::MKLDNNGetDataType<T>(), input_format);
Use input->format() directly here; input_format is only used once.
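A minimal sketch of the change the reviewer seems to be asking for, assuming input is the operator's input tensor and that its format() value can be passed straight to platform::MKLDNNMemDesc, just as input_format is in the diff above:

    // Hedged sketch: drop the single-use input_format variable and pass the
    // tensor's format directly. src_tz, input, and the platform helpers are
    // taken from the surrounding kernel; nothing new is introduced.
    auto src_md = platform::MKLDNNMemDesc(
        src_tz, platform::MKLDNNGetDataType<T>(), input->format());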

@@ -267,35 +302,74 @@ class PoolMKLDNNGradOpKernel : public paddle::framework::OpKernel<T> {
    auto pool_bwd_pd = mkldnn::pooling_backward::primitive_desc(
        pool_bwd_desc, mkldnn_engine, *pool_pd);

    // reorder between user_diff_dst and pool diff_dst if needed
    diff_dst_memory = std::make_shared<memory>(user_diff_dst_memory);
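For context, a hedged sketch of the usual "reorder if needed" pattern with the MKL-DNN 0.x C++ API that this code targets; the member names (get_primitive_desc, diff_dst_primitive_desc) and the pipeline vector are assumptions based on that API generation, not lines taken from this pull request:

    // Assumed sketch (MKL-DNN 0.x API): reuse the user's diff_dst memory when
    // its layout already matches what pooling_backward expects, otherwise
    // allocate a memory in the expected layout and reorder into it.
    std::vector<mkldnn::primitive> pipeline;
    auto diff_dst_memory = std::make_shared<mkldnn::memory>(user_diff_dst_memory);
    if (user_diff_dst_memory.get_primitive_desc() !=
        pool_bwd_pd.diff_dst_primitive_desc()) {
      diff_dst_memory =
          std::make_shared<mkldnn::memory>(pool_bwd_pd.diff_dst_primitive_desc());
      pipeline.push_back(mkldnn::reorder(user_diff_dst_memory, *diff_dst_memory));
    }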
We recommend unique_ptr; maybe next time you can change all of them.
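A small illustration of the reviewer's suggestion, under the assumption that the memory has a single owner; if the memory is cached as a blob in dev_ctx and shared across calls, shared_ptr would still be the right choice.

    // Hedged sketch of the unique_ptr variant; only worthwhile where ownership
    // is not shared with the dev_ctx blob cache.
    std::unique_ptr<mkldnn::memory> diff_dst_memory(
        new mkldnn::memory(user_diff_dst_memory));
    // With C++14 this would read:
    // auto diff_dst_memory = std::make_unique<mkldnn::memory>(user_diff_dst_memory);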

@tensor-tang tensor-tang left a comment

LGTM

The comments can be addressed next time.

@tensor-tang tensor-tang merged commit b3fd9da into PaddlePaddle:develop Jun 11, 2018