
oneDNN NHWC fixes #40049

Merged

Conversation

jczaja
Contributor

@jczaja jczaja commented Mar 1, 2022

PR types

Bug fixes

PR changes

Others

Describe

The oneDNN NHWC implementation needs to know upfront whether a model is NHWC or NCHW, so a mechanism was added to check that.
The relevant UT from #39654 was also added.
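
For readers outside the review: a minimal sketch of what such an upfront check could look like. The helper name RegisterModelLayout and the set_cur_paddle_data_layout setter are assumptions here, inferred from the check_attrib lambda and the get_cur_paddle_data_layout call quoted in the review below.

// Hypothetical sketch: scan the program's ops once, before execution, and
// record the model-wide layout in thread-local state so that later
// GetKernelTypeForVar calls can consult it.
void RegisterModelLayout(
    std::vector<std::unique_ptr<framework::OperatorBase>>& ops) {
  auto check_attrib = [](std::unique_ptr<framework::OperatorBase>& op,
                         const std::string& attrib_name) -> bool {
    if (op->HasAttr(attrib_name)) {
      auto data_format = op->Attr<std::string>(attrib_name);
      // set_cur_paddle_data_layout is assumed to mirror the
      // get_cur_paddle_data_layout accessor quoted later in this thread.
      platform::MKLDNNDeviceContext::tls().set_cur_paddle_data_layout(
          data_format == "NHWC" ? framework::DataLayout::kNHWC
                                : framework::DataLayout::kNCHW);
      return true;
    }
    return false;
  };
  // The first op that declares a layout attribute classifies the whole
  // model, so the scan can stop early.
  for (auto& op : ops) {
    if (check_attrib(op, std::string("data_format"))) return;
    if (check_attrib(op, std::string("data_layout"))) return;
  }
}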

@paddle-bot-old

paddle-bot-old bot commented Mar 1, 2022

Thanks for your contribution!
Please wait for the result of CI first. See Paddle CI Manual for details.

@jczaja jczaja added the Intel label Mar 1, 2022
@jczaja jczaja force-pushed the prv-nhwc-yet-another-implementation branch from 1763e97 to 70869b8 on March 1, 2022 17:13
@jczaja jczaja changed the title from [TESTING] oneDNN NHWC fixes to oneDNN NHWC fixes Mar 3, 2022
@jczaja jczaja force-pushed the prv-nhwc-yet-another-implementation branch from 1db6f35 to e576182 on March 4, 2022 09:58
@jczaja jczaja force-pushed the prv-nhwc-yet-another-implementation branch from e576182 to 72fd182 on March 4, 2022 13:36
- fix

- compilation fixes

- fix

- fix

- fix

- fix

- compilation fix

- comment fix

- lint

- update mkldnn conv_elementwise_add_fuse_pass ut

- NHWC changes to prelu

- alpha dims

- UT fix

- fix to UT

- lint

- Some fixes

- added NHWC support to BWD of prelu

- reverted removal of resetting cu_layout when clearing the cache
@jczaja
Contributor Author

jczaja commented Mar 4, 2022

@baoachun Hi, this PR fixes #38126 (the UTs and model pointed there) and incorporates the UT you created. The only problem is that in CI, test_lrn_op is failing (timeout). test_lrn_op is not related to oneDNN, so perhaps you can advise where the problem is?

@lidanqing-intel
Contributor

@baoachun Hi, this PR passed all CIs, please review.

@jczaja
Contributor Author

jczaja commented Mar 10, 2022

@tsocha, @Silv3S Please help with the review. Ask questions if something is not clear.

Comment on lines 566 to 567
auto check_attrib = [&](std::unique_ptr<framework::OperatorBase>& op,
const std::string& attrib_name) -> bool {
Contributor

Suggested change
auto check_attrib = [&](std::unique_ptr<framework::OperatorBase>& op,
const std::string& attrib_name) -> bool {
auto check_attrib = [](std::unique_ptr<framework::OperatorBase>& op,
const std::string& attrib_name) -> bool {

(The lambda does not use anything from the enclosing scope, so the default capture [&] can be dropped.)

Comment on lines 234 to 250
framework::OpKernelType GetKernelTypeForVar(
const std::string &var_name, const Tensor &tensor,
const framework::OpKernelType &expected_kernel_type) const {
#ifdef PADDLE_WITH_MKLDNN
// All inputs (including alpha) need shape rotating
if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
(tensor.layout() != framework::DataLayout::kMKLDNN) &&
paddle::platform::MKLDNNDeviceContext::tls()
.get_cur_paddle_data_layout() == framework::DataLayout::kNHWC) {
return framework::OpKernelType(expected_kernel_type.data_type_,
tensor.place(),
framework::DataLayout::kNHWC);
}
#endif
return framework::OpKernelType(expected_kernel_type.data_type_,
tensor.place(), tensor.layout());
}
Contributor

Don't repeat yourself, please.

Contributor Author

ok
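
For illustration, one way the repeated block could be factored out. This is a hedged sketch only: the helper name GetKernelTypeForMKLDNNVar is hypothetical, and the body is just the logic quoted above moved to one place.

// Hypothetical shared helper, so each op's GetKernelTypeForVar can delegate
// to a single implementation instead of repeating the #ifdef block.
inline framework::OpKernelType GetKernelTypeForMKLDNNVar(
    const Tensor& tensor,
    const framework::OpKernelType& expected_kernel_type) {
#ifdef PADDLE_WITH_MKLDNN
  // Rotate the shape only when entering an oneDNN kernel from a
  // non-oneDNN tensor while the model as a whole is NHWC.
  if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) &&
      (tensor.layout() != framework::DataLayout::kMKLDNN) &&
      paddle::platform::MKLDNNDeviceContext::tls()
              .get_cur_paddle_data_layout() == framework::DataLayout::kNHWC) {
    return framework::OpKernelType(expected_kernel_type.data_type_,
                                   tensor.place(),
                                   framework::DataLayout::kNHWC);
  }
#endif
  return framework::OpKernelType(expected_kernel_type.data_type_,
                                 tensor.place(), tensor.layout());
}

Each operator's GetKernelTypeForVar then reduces to a one-line call to the helper.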

paddle/fluid/operators/prelu_op.cc (outdated; resolved)
Contributor

@lidanqing-intel lidanqing-intel left a comment


LGTM

@lidanqing-intel lidanqing-intel assigned jczaja and unassigned baoachun Mar 10, 2022
tsocha
tsocha previously approved these changes Mar 10, 2022
def is_program_valid(self, program_config: ProgramConfig) -> bool:
attrs = [
program_config.ops[i].attrs
for i in range(len(program_config.ops))
]
# If the problem has been fixed, the judgment
Member

A similar condition is in test_mkldnn_conv_mish_fuse_pass.py and test_mkldnn_depthwise_conv.py. Should we enable testing NHWC there too?

Contributor Author

@Silv3S This UT was added in #39654, so I just copied it here as suggested by @lidanqing-intel. I do not intend to extend it in this PR.

quant=False, passes=["conv_elementwise_add_mkldnn_fuse_pass"])


'''
Member

Why is the whole original unit test commented out?

Contributor Author

I just copied this UT from #39654 as requested, so I am not sure of the author's intention.

@jczaja jczaja dismissed stale reviews from tsocha and lidanqing-intel via b09b484 March 10, 2022 14:04
Contributor

@jakpiase jakpiase left a comment


LGTM

@Silv3S
Member

Silv3S commented Mar 10, 2022

LGTM

@jczaja jczaja closed this Mar 14, 2022
@jczaja jczaja reopened this Mar 14, 2022
@jczaja
Contributor Author

jczaja commented Mar 14, 2022

@baoachun Please review.

@lidanqing-intel lidanqing-intel self-requested a review March 15, 2022 06:11
Contributor

@lidanqing-intel lidanqing-intel left a comment


LGTM
