
Bell/fix onnx multi output failure #10

Open · wants to merge 20 commits into base: river/fix_segmentfault_plugin_api_2.0

Changes from 14 commits (20 commits total):
0baadc5  change default value  (songbell, Nov 21, 2022)
8eb2e47  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Nov 24, 2022)
5aa4e7b  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Nov 29, 2022)
8438661  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Dec 5, 2022)
6b6484b  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Dec 27, 2022)
b956852  fix case failure  (songbell, Dec 27, 2022)
fc85ad3  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Feb 28, 2023)
00141b6  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Mar 9, 2023)
d2c2f29  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Jun 21, 2023)
f74f417  fix post commit failure  (songbell, Jun 21, 2023)
968d0b9  Merge branch 'master' of https://github.com/openvinotoolkit/openvino  (songbell, Jun 29, 2023)
62a5d72  fix sdl issues  (songbell, Jun 29, 2023)
a0228a5  Merge branch 'master' into bell/fix_sdl_issues  (songbell, Jun 29, 2023)
6be030b  Fixed SpaceToBatch and BatchToSpace for 3d case (#18033)  (steve-y, Jul 3, 2023)
cb8d34d  [DOCS] adjustments for ST and cookie policy (#18315)  (kblaszczak-intel, Jul 3, 2023)
36fedf8  test onnx failures, reuse port tensors  (songbell, Jul 3, 2023)
d0d16b6  Merge branch 'river/fix_segmentfault_plugin_api_2.0' of https://githu…  (songbell, Jul 3, 2023)
28bc478  Merge branch 'master' of https://github.com/openvinotoolkit/openvino …  (songbell, Jul 3, 2023)
efaaa58  do not introduce new name  (songbell, Jul 3, 2023)
e846c4b  clang  (songbell, Jul 3, 2023)
4 changes: 2 additions & 2 deletions src/inference/src/dev/isync_infer_request.cpp
@@ -263,10 +263,10 @@ void ov::ISyncInferRequest::allocate_tensor(const ov::Output<const ov::Node>& po
void ov::ISyncInferRequest::check_tensors() const {
const auto& inputs = m_compiled_model->inputs();
for (size_t i = 0; i < inputs.size(); i++) {
-        check_tensor(inputs[i], m_tensors.at(inputs[i].get_tensor_ptr()));
+        check_tensor(inputs[i], get_ref_tensor(inputs[i]));
Owner comment:

get_ref_tensor() can avoid the tensor mismatch issue, but m_tensors.at(inputs[i].get_tensor_ptr()) itself should not have any problem, should it?

Owner comment:

We can see that the compiled models are different:

1. Original model: (screenshot)
2. If the input/output ports are not created, the final compiled model: (screenshot)
3. If the input/output ports are created, as done in this PR, the final compiled model stays unchanged after transformation: (screenshot)

It is surprising that creating the input/output ports affects the transformation.

Author comment:

Did you remove the set_names lines?

}
const auto& outputs = m_compiled_model->outputs();
for (size_t i = 0; i < outputs.size(); i++) {
-        check_tensor(outputs[i], m_tensors.at(outputs[i].get_tensor_ptr()));
+        check_tensor(outputs[i], get_ref_tensor(outputs[i]));
}
}
6 changes: 3 additions & 3 deletions src/plugins/auto/src/common.hpp
@@ -92,9 +92,9 @@ struct DeviceInformation {
DeviceName unique_name;
unsigned int device_priority;
     DeviceInformation(DeviceName dn = {}, ov::AnyMap conf = {},
-                      int nReq = -1, std::string defaultID = {}, DeviceName uName = {}, unsigned int priority = 0)
-        : device_name(dn), config(conf),
-          num_requests_per_devices(nReq), default_device_id(defaultID), unique_name(uName), device_priority(priority)
+                      int n_req = -1, std::string default_id = {}, DeviceName name = {}, unsigned int priority = 0)
+        : device_name(std::move(dn)), config(std::move(conf)),
+          num_requests_per_devices(n_req), default_device_id(std::move(default_id)), unique_name(std::move(name)), device_priority(priority)
     {}
};

5 changes: 2 additions & 3 deletions src/plugins/auto/src/plugin.cpp
@@ -282,8 +282,7 @@ ov::Any Plugin::get_property(const std::string& name, const ov::AnyMap& argument
auto ret = m_plugin_config.supported_properties(get_device_name());
return ret;
     } else if (name == ov::device::full_name) {
-        std::string device_name = { get_device_name() };
-        return decltype(ov::device::full_name)::value_type {device_name};
+        return decltype(ov::device::full_name)::value_type {get_device_name()};
} else if (name == ov::device::capabilities.name()) {
auto device_list = get_core()->get_available_devices();
std::vector<std::string> capabilities;
@@ -538,7 +537,7 @@ ov::SupportedOpsMap Plugin::query_model(const std::shared_ptr<const ov::Model>&
queryconfig.apply_user_properties();
auto full_property = queryconfig.get_full_properties();
auto priorities = full_property.find(ov::device::priorities.name());
-    if (!priorities->second.empty()) {
+    if (priorities != full_property.end() && !priorities->second.empty()) {
auto meta_devices = parse_meta_devices(priorities->second.as<std::string>(), full_property);
std::unordered_set<std::string> supported_layers;
for (auto&& value : meta_devices) {