
[ONNX] Extended ReduceMax by opsets 13,18,20 #23475

Merged: 5 commits, Mar 21, 2024
2 changes: 1 addition & 1 deletion src/frontends/onnx/frontend/CMakeLists.txt
@@ -10,7 +10,7 @@ ov_add_frontend(NAME onnx
FILEDESCRIPTION "FrontEnd to load and convert ONNX file format"
LINK_LIBRARIES openvino_onnx_common openvino::core::dev)

set(ONNX_OPSET_VERSION 18 CACHE INTERNAL "Supported version of ONNX operator set")
set(ONNX_OPSET_VERSION 20 CACHE INTERNAL "Supported version of ONNX operator set")
target_compile_definitions(${TARGET_NAME} PRIVATE ONNX_OPSET_VERSION=${ONNX_OPSET_VERSION})

ov_ncc_naming_style(FOR_TARGET ${TARGET_NAME}
46 changes: 45 additions & 1 deletion src/frontends/onnx/frontend/src/op/reduce.cpp
@@ -8,6 +8,7 @@
#include "identity.hpp"
#include "openvino/frontend/exception.hpp"
#include "openvino/op/constant.hpp"
#include "openvino/op/convert.hpp"
#include "openvino/op/exp.hpp"
#include "openvino/op/log.hpp"
#include "openvino/op/multiply.hpp"
@@ -94,6 +95,27 @@ const std::set<element::Type> supported_types_v1 =
{element::u32, element::u64, element::i32, element::i64, element::f16, element::f32, element::f64};
const std::set<element::Type> supported_types_v2 =
{element::u32, element::u64, element::i32, element::i64, element::f16, element::f32, element::f64, element::bf16};
const std::set<element::Type> supported_types_v3 = {element::u32,
Hi,
I'm working on new opsets for ReduceLogSumExp and would like to take this PR as an example.
I'm wondering whether inventing a new versioning scheme for the type lists is a good idea. Wouldn't it be better to just put the supported_types constants into a namespace?

Contributor Author

@mwilczy, do you mean why I didn't align the names with the opset versions? For example, supported_types_v1, v11, v13, etc., or op::set_1::supported_types, op::set_11::supported_types, etc. Did I understand correctly? I considered that at first, but...

I chose this approach after comparing all the differences between every version of every Reduce* operation as a table. In summary:

The type lists aren't aligned across Reduce* versions. For example, ReduceMax/Min-13 supports a different list of types (int8 and uint8 were added) than all other Reduce*-13 operations, so opset 13 effectively introduced two different lists of supported types at once. I really don't like constructions like op::set_13::supported_types::v1 ;)

So the supported_types for ReduceMax/Min would look different in any case, and I would need to introduce more intermediate entities: define four different lists in a common section and alias them via "using supported_types = supported_types_v3" for each Reduce*. I think that would overcomplicate the source without any benefit to readability. It could become reasonable if we split the Reduce* operations into separate files (perhaps once all implementations are ready).

Hope I've answered your question...

element::u64,
element::i32,
element::i64,
element::f16,
element::f32,
element::f64,
element::bf16,
element::i8,
element::u8};
const std::set<element::Type> supported_types_v4 = {element::u32,
element::u64,
element::i32,
element::i64,
element::f16,
element::f32,
element::f64,
element::bf16,
element::i8,
element::u8,
element::boolean};

template <typename OpType>
std::shared_ptr<ov::Node> make_ov_reduction_op(const Node& node,
@@ -177,11 +199,33 @@ namespace set_13 {
ov::OutputVector reduce_sum(const ov::frontend::onnx::Node& node) {
return {make_ov_reduction_op<v1::ReduceSum>(node, node.get_ov_inputs().at(0), supported_types_v2, false)};
}
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node) {
return {make_ov_reduction_op<v1::ReduceMax>(node, node.get_ov_inputs().at(0), supported_types_v3)};
}
} // namespace set_13

namespace set_18 {
// Placeholder
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node) {
return {make_ov_reduction_op<v1::ReduceMax>(node, node.get_ov_inputs().at(0), supported_types_v3, false)};
}
} // namespace set_18

namespace set_20 {
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node) {
auto data = node.get_ov_inputs().at(0);
if (data.get_element_type() != element::boolean) {
return {make_ov_reduction_op<v1::ReduceMax>(node, data, supported_types_v3, false)};
} else {
// Handling boolean as a uint8
return {std::make_shared<v0::Convert>(
make_ov_reduction_op<v1::ReduceMax>(node,
std::make_shared<ov::op::v0::Convert>(data, element::u8),
supported_types_v4,
false),
element::boolean)};
}
}
} // namespace set_20
} // namespace op
} // namespace onnx
} // namespace frontend
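The set_20 handler wraps ReduceMax for boolean inputs in a Convert-to-u8 / Convert-back pair, since taking the max of booleans is equivalent to a logical OR over the reduced axis. A minimal Python sketch (not OpenVINO code; the function name is invented for illustration) of why this round-trip preserves semantics:

```python
# Sketch: emulate boolean ReduceMax via uint8, mirroring the
# Convert -> ReduceMax -> Convert chain in set_20 above.
def reduce_max_bool(rows):
    """rows: list of lists of bools; reduce over the inner axis."""
    as_u8 = [[1 if v else 0 for v in row] for row in rows]  # bool -> u8
    reduced = [max(row) for row in as_u8]                    # ReduceMax per row
    return [bool(v) for v in reduced]                        # u8 -> bool

print(reduce_max_bool([[False, True], [False, False]]))  # [True, False]
```

Any row containing at least one True reduces to 1 and converts back to True, matching a per-row logical OR.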
9 changes: 9 additions & 0 deletions src/frontends/onnx/frontend/src/op/reduce.hpp
@@ -29,6 +29,15 @@ ov::OutputVector reduce_l2(const ov::frontend::onnx::Node& node);
namespace set_1 {
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node);
} // namespace set_1
namespace set_13 {

This operator is also present in opset version 11. Is there a reason we're skipping version 11? In my case, ReduceLogSumExp doesn't change between versions 1 and 11, but maybe it would be good to include it for compatibility reasons?

Contributor Author

Hi @mwilczy, I put the common motivation for not introducing opset 11 for all Reduce* operations here

ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node);
} // namespace set_13
namespace set_18 {
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node);
} // namespace set_18
namespace set_20 {
ov::OutputVector reduce_max(const ov::frontend::onnx::Node& node);
} // namespace set_20

namespace set_1 {
ov::OutputVector reduce_mean(const ov::frontend::onnx::Node& node);
3 changes: 3 additions & 0 deletions src/frontends/onnx/frontend/src/ops_bridge.cpp
@@ -484,6 +484,9 @@ OperatorsBridge::OperatorsBridge() {
REGISTER_OPERATOR("ReduceL1", 1, reduce_l1);
REGISTER_OPERATOR("ReduceL2", 1, reduce_l2);
REGISTER_OPERATOR("ReduceMax", 1, reduce_max);
REGISTER_OPERATOR("ReduceMax", 13, reduce_max);
REGISTER_OPERATOR("ReduceMax", 18, reduce_max);
REGISTER_OPERATOR("ReduceMax", 20, reduce_max);
REGISTER_OPERATOR("ReduceMean", 1, reduce_mean);
REGISTER_OPERATOR("ReduceMin", 1, reduce_min);
REGISTER_OPERATOR("ReduceProd", 1, reduce_prod);
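Registering "ReduceMax" separately for opsets 1, 13, 18, and 20 lets the bridge dispatch each node to the highest registered version that does not exceed the opset version declared by the model. A hypothetical Python sketch of that dispatch rule (this is not the actual OperatorsBridge implementation; names are illustrative):

```python
# Hypothetical sketch of opset dispatch: pick the highest registered
# version that does not exceed the model's declared opset version.
REGISTRY = {
    "ReduceMax": {
        1: "set_1::reduce_max",
        13: "set_13::reduce_max",
        18: "set_18::reduce_max",
        20: "set_20::reduce_max",
    },
}

def resolve(op_type, model_opset):
    versions = [v for v in REGISTRY[op_type] if v <= model_opset]
    if not versions:
        raise KeyError(f"{op_type} unsupported before opset {min(REGISTRY[op_type])}")
    return REGISTRY[op_type][max(versions)]

print(resolve("ReduceMax", 14))  # set_13::reduce_max
print(resolve("ReduceMax", 20))  # set_20::reduce_max
```

Under this rule a model importing opset 14 through 17 still lands on the set_13 handler, which is why only the versions with behavioral changes need registering.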
2 changes: 0 additions & 2 deletions src/frontends/onnx/tests/__init__.py
@@ -59,7 +59,6 @@ def xfail_test(reason="Mark the test as expected to fail", strict=True):
xfail_issue_99954 = xfail_test(reason="Constant Pad - RuntimeError: Shape inference of Reference node with name y failed")
xfail_issue_99955 = xfail_test(reason="GroupNorm is not supported")
xfail_issue_99957 = xfail_test(reason="LayerNorm - RuntimeError: While validating node '<Node(Reshape): Mean>'")
xfail_issue_99958 = xfail_test(reason="LogSoftmax - Results mismatch")
xfail_issue_99960 = xfail_test(reason="MVN - Results mismatch")
xfail_issue_99961 = xfail_test(reason="Optional has/get element operators are not supported)'")
xfail_issue_99962 = pytest.mark.skip(reason="ReduceL1/L2 - Unrecognized attribute: axes for operator ReduceL1/L2")
@@ -71,7 +70,6 @@ def xfail_test(reason="Mark the test as expected to fail", strict=True):
xfail_issue_99970 = xfail_test(reason="Scatter and ScatterND - RuntimeError: Check '(reduction == none)' failed at "
"src/frontends/onnx/frontend/src/op/scatter_elements.cpp OR at "
"src/frontends/onnx/frontend/src/op/scatter_nd")
xfail_issue_99972 = xfail_test(reason="Softmax - Results mismatch")
xfail_issue_99973 = xfail_test(reason="Split - RuntimeError: While validating ONNX node "
"'<Node(Split): output_1, output_2, output_3, output_4>'")
xfail_issue_38710 = xfail_test(reason="RuntimeError: data has zero dimension which is not allowed")
68 changes: 68 additions & 0 deletions src/frontends/onnx/tests/models/reduce_max_18.prototxt
@@ -0,0 +1,68 @@
ir_version: 3
producer_name: "OpenVINO ONNX Frontend"
graph {
node {
input: "A"
input: "axes"
output: "B"
op_type: "ReduceMax"
}
name: "compute_graph"
initializer {
data_type: 6
dims: 1
name: "axes"
raw_data: "\002\000\000\000"
}
input {
name: "A"
type {
tensor_type {
elem_type: 2
shape {
dim {
dim_value: 1
}
dim {
dim_value: 1
}
dim {
dim_value: 4
}
dim {
dim_value: 4
}
}
}
}
}
input {
name: "axes"
type {
tensor_type {
elem_type: 6
shape {
dim {
dim_value: 1
}
}
}
}
}
output {
name: "B"
type {
tensor_type {
elem_type: 2
shape {
dim {
dim_value: 1
}
}
}
}
}
}
opset_import {
version: 18
}
48 changes: 48 additions & 0 deletions src/frontends/onnx/tests/models/reduce_wrong_type_v3.prototxt
@@ -0,0 +1,48 @@
ir_version: 3
producer_name: "OpenVINO ONNX Frontend"
graph {
node {
input: "A"
output: "B"
op_type: "ReduceMax"
}
name: "compute_graph"
input {
name: "A"
type {
tensor_type {
elem_type: 9
shape {
dim {
dim_value: 1
}
dim {
dim_value: 1
}
dim {
dim_value: 4
}
dim {
dim_value: 4
}
}
}
}
}
output {
name: "B"
type {
tensor_type {
elem_type: 9
shape {
dim {
dim_value: 1
}
}
}
}
}
}
opset_import {
version: 13
}
48 changes: 48 additions & 0 deletions src/frontends/onnx/tests/models/reduce_wrong_type_v4.prototxt
@@ -0,0 +1,48 @@
ir_version: 3
producer_name: "OpenVINO ONNX Frontend"
graph {
node {
input: "A"
output: "B"
op_type: "ReduceMax"
}
name: "compute_graph"
input {
name: "A"
type {
tensor_type {
elem_type: 9
shape {
dim {
dim_value: 1
}
dim {
dim_value: 1
}
dim {
dim_value: 4
}
dim {
dim_value: 4
}
}
}
}
}
output {
name: "B"
type {
tensor_type {
elem_type: 9
shape {
dim {
dim_value: 1
}
}
}
}
}
}
opset_import {
version: 20
}
22 changes: 22 additions & 0 deletions src/frontends/onnx/tests/onnx_import.in.cpp
@@ -974,6 +974,28 @@ OPENVINO_TEST(${BACKEND_NAME}, onnx_model_reduce_max) {
test_case.run();
}

OPENVINO_TEST(${BACKEND_NAME}, onnx_model_reduce_max_18) {
// TEMPLATE plugin has an issue with evaluation for u8 type
if (std::string("${BACKEND_NAME}") == std::string("INTERPRETER")) {
GTEST_SKIP();
}

auto model = convert_model("reduce_max_18.onnx");

// input data shape (1, 1, 4, 4)
std::vector<std::vector<uint8_t>> inputs{
ov::test::NDArray<uint8_t, 4>({{{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}}}})
.get_vector()};

// expected output: maxima over axis 2, i.e. shape (1, 1, 1, 4) flattened
auto expected_output = ov::test::NDArray<uint8_t, 1>({13, 14, 15, 16}).get_vector();

auto test_case = ov::test::TestCase(model, s_device);
test_case.add_multiple_inputs(inputs);
test_case.add_expected_output(expected_output);
test_case.run();
}
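The expected values in the test above can be checked independently: reducing the (1, 1, 4, 4) input over axis 2 keeps, for each column of the inner 4x4 matrix, its maximum. A pure-Python check:

```python
# Independent check of the test's expected output: ReduceMax over axis 2
# of the 4x4 inner matrix picks the per-column maxima.
data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
col_max = [max(row[c] for row in data) for c in range(4)]
print(col_max)  # [13, 14, 15, 16]
```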

OPENVINO_TEST(${BACKEND_NAME}, onnx_model_reduce_max_invalid_axes) {
EXPECT_THROW(convert_model("reduce_max_invalid_axes.onnx"), ov::Exception);
}
21 changes: 21 additions & 0 deletions src/frontends/onnx/tests/onnx_import_exceptions.cpp
@@ -92,3 +92,24 @@ TEST(onnx_importer, exception_msg_onnx_reduce_wrong_type_v2) {
FAIL() << "The ONNX model importer failed for unexpected reason";
}
}

TEST(onnx_importer, exception_msg_onnx_reduce_wrong_type_v3) {
try {
convert_model("reduce_wrong_type_v3.onnx");
// Should have thrown, so fail if it didn't
FAIL() << "ONNX Importer did not detect the incorrect model!";
} catch (const ::ov::Exception& e) {
EXPECT_HAS_SUBSTRING(e.what(), std::string("Unsupported input type boolean"));
}
// On macOS, a re-thrown ov::Exception cannot be caught as ov::Exception again,
// hence the workaround below.
catch (const std::exception& e) {
EXPECT_HAS_SUBSTRING(e.what(), std::string("Unsupported input type boolean"));
} catch (...) {
FAIL() << "The ONNX model importer failed for unexpected reason";
}
}

TEST(onnx_importer, no_exception_onnx_reduce_wrong_type_v4) {
EXPECT_NO_THROW(convert_model("reduce_wrong_type_v4.onnx"));
}
17 changes: 0 additions & 17 deletions src/frontends/onnx/tests/tests_python/test_backend.py
@@ -51,13 +51,11 @@
xfail_issue_99954,
xfail_issue_99955,
xfail_issue_99957,
xfail_issue_99958,
xfail_issue_99960,
xfail_issue_99961,
xfail_issue_99968,
xfail_issue_99969,
xfail_issue_99970,
xfail_issue_99972,
xfail_issue_99973,
xfail_issue_101965,
xfail_issue_113506,
@@ -453,10 +451,6 @@ def expect_fail(test_case_path, xfail):  # type: (str) -> None
"OnnxBackendNodeModelTest.test_layer_normalization_4d_axis_negative_3_expanded_ver18_cpu",
"OnnxBackendNodeModelTest.test_layer_normalization_default_axis_expanded_ver18_cpu",
),
(
xfail_issue_99958,
"OnnxBackendNodeModelTest.test_logsoftmax_large_number_expanded_ver18_cpu",
),
(
xfail_issue_99960,
"OnnxBackendNodeModelTest.test_mvn_expanded_ver18_cpu",
@@ -499,12 +493,6 @@ def expect_fail(test_case_path, xfail):  # type: (str) -> None
"OnnxBackendNodeModelTest.test_reduce_log_sum_exp_negative_axes_keepdims_example_cpu",
"OnnxBackendNodeModelTest.test_reduce_log_sum_exp_keepdims_random_cpu",
"OnnxBackendNodeModelTest.test_reduce_log_sum_negative_axes_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_do_not_keepdims_example_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_do_not_keepdims_random_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_keepdims_example_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_keepdims_random_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_negative_axes_keepdims_example_cpu",
"OnnxBackendNodeModelTest.test_reduce_max_negative_axes_keepdims_random_cpu",
"OnnxBackendNodeModelTest.test_reduce_mean_do_not_keepdims_example_cpu",
"OnnxBackendNodeModelTest.test_reduce_log_sum_exp_negative_axes_keepdims_random_cpu",
"OnnxBackendNodeModelTest.test_reduce_mean_do_not_keepdims_random_cpu",
@@ -552,10 +540,6 @@ def expect_fail(test_case_path, xfail):  # type: (str) -> None
"OnnxBackendNodeModelTest.test_scatternd_max_cpu",
"OnnxBackendNodeModelTest.test_scatternd_min_cpu",
),
(
xfail_issue_99972,
"OnnxBackendNodeModelTest.test_softmax_large_number_expanded_ver18_cpu",
),
(
xfail_issue_99973,
"OnnxBackendNodeModelTest.test_split_1d_uneven_split_opset18_cpu",
@@ -732,7 +716,6 @@ def expect_fail(test_case_path, xfail):  # type: (str) -> None
),
(
xfail_issue_125495,
"OnnxBackendNodeModelTest.test_reduce_max_bool_inputs_cpu",
"OnnxBackendNodeModelTest.test_reduce_min_bool_inputs_cpu",
),
(