
Fix handling of nodes inserted by NHWC transformer. #10904

Merged
merged 1 commit into master from edgchen1/nhwc_transformer_ort_format_issue on Mar 17, 2022

Conversation

@edgchen1 (Contributor) commented Mar 17, 2022

Description
Addresses an issue with loading an ORT format model containing nodes that are directly replaced by the NHWC transformer; the kernel hash lookup did not fall back to the internal NHWC op hashes, so kernel resolution failed:

$ onnxruntime_perf_test -I -m times -s -e cpu mobilenet_v2_uint8.ort
...
Could not find an implementation for QLinearConv(1) node with name 'QLinearConv'

Model is from https://onnxruntimeexamplesdata.z13.web.core.windows.net/mobilenet_v2_ort_models.zip

Motivation and Context
Fixes the model loading issue described above.
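
A minimal sketch of checking the fix from Python, assuming a build that supports loading ORT format models and that mobilenet_v2_uint8.ort from the link above is in the working directory (onnxruntime_perf_test above exercises the same session initialization path):

import onnxruntime as ort

# Session creation runs kernel resolution for the ORT format model; before this
# fix it failed with the "Could not find an implementation for QLinearConv(1)"
# error shown above for the node replaced by the NHWC transformer.
sess = ort.InferenceSession("mobilenet_v2_uint8.ort",
                            providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)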

@@ -1280,6 +1280,9 @@ Status AssignNodesToEpsFromHashesImpl(Graph& graph, const fbs::SessionState& fbs
   for (const auto& node : graph.Nodes()) {
     if (node.GetExecutionProviderType().empty()) {
       auto kernel_hash = utils::GetHashValueFromStaticKernelHashMap(node.OpType(), node.SinceVersion());
+      if (!kernel_hash.has_value()) {
+        kernel_hash = utils::GetInternalNhwcOpHash(node);
+      }
       if (kernel_hash.has_value()) {
@skottmckay (Contributor) Mar 17, 2022

Ugh. We really need to fix this setup of doing the same hash lookups in both inference_session and session_state.

Maybe we can move AssignNodesToEpsFromHashes into SessionState as discussed (FinalizeSessionState is called immediately after AssignNodesToEpsFromHashesImpl) and merge it with the logic that does hash lookups there.

@skottmckay (Contributor) Mar 17, 2022

Not sure how GraphRuntimeOptimizationTest.TestNhwcTransformer didn't hit this. That test checks the optimization is applied and calls InferenceSession::Initialize.

@edgchen1 (Contributor, Author)

Will clean up the hash lookup and add/update tests in a separate change so we can cherry-pick this one first.

@skottmckay (Contributor) left a comment

:shipit:

@edgchen1 marked this pull request as ready for review March 17, 2022 17:12
@edgchen1 merged commit 07a71d5 into master Mar 17, 2022
@edgchen1 deleted the edgchen1/nhwc_transformer_ort_format_issue branch March 17, 2022 19:41
chilo-ms added a commit that referenced this pull request Mar 18, 2022
* Update to flatbuffers v2.0.0 (#10866)

* Fix Reduced ops pipeline (#10861)

* Fix a couple of issues with the python package tools (#10858)

* Tweaks to the model utils
  * Add handling for a dim_value of -1 when replacing the entire input shape. This occurs in models exported from PaddlePaddle (see the sketch after this list).
  * make pytorch helpers accessible in package
  * make QDQ helpers accessible in package
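
A rough sketch of the dim_value handling mentioned in the list above, written against the plain onnx protobuf API rather than the actual onnxruntime python package tool helpers (the function and argument names here are illustrative):

import onnx

def replace_input_shape(model_path, input_name, new_shape, out_path):
    # Replace the entire shape of one graph input. A pre-existing dim_value of
    # -1 (seen in models exported from PaddlePaddle) is not a usable static
    # dimension, so the whole shape is overwritten rather than merged.
    model = onnx.load(model_path)
    for graph_input in model.graph.input:
        if graph_input.name != input_name:
            continue
        tensor_type = graph_input.type.tensor_type
        del tensor_type.shape.dim[:]
        for d in new_shape:
            dim = tensor_type.shape.dim.add()
            if isinstance(d, int):
                dim.dim_value = d
            else:
                dim.dim_param = d  # symbolic dimension name, e.g. "batch"
    onnx.save(model, out_path)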

* Fix wrong percentile values returned during calibration (#10847)

* Use numpy.percentile to get the lookup value (see the sketch after this list).

* Use 1.0 as a float value rather than an integer.

* Add missing cdf parameter for `np.percentile`.

* Use 100. instead of 1.0

* Remove print.

* Update from @yufenglee
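
A hedged illustration of the percentile fix above: numpy.percentile expects q on a 0-100 scale, so a 99.99th percentile lookup must pass 99.99 (or 100.0 for the maximum), not 0.9999 or 1.0. The variable names below are illustrative, not the calibrator's actual ones:

import numpy as np

# Toy stand-in for the observed activation values collected during calibration.
activations = np.abs(np.random.randn(100_000).astype(np.float32))

percentile = 99.99  # float on the 0-100 scale expected by np.percentile
threshold = np.percentile(activations, percentile)
print(f"calibration threshold at the {percentile}th percentile: {threshold:.4f}")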

* Add support for opset 16 to transpose optimizer. (#10841)

* Add support for opset 16 to transpose optimizer.

The only change required is adding GridSample to the layout sensitive ops. The existing layout transpose handling works for it, as the first input and first output are the layout sensitive ones (see the sketch after this entry).

Update the optimizer to be able to return an error message if it fails.
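
A conceptual sketch of the GridSample change in the entry above, assuming a hypothetical registry of layout sensitive ops (the transpose optimizer's real data structures differ):

# Hypothetical registry mapping op type to the input/output indices whose
# tensors carry NCHW/NHWC layout. For GridSample (opset 16) only the first
# input and first output are layout sensitive, so registering it is the only
# change the layout transpose handling needs.
LAYOUT_SENSITIVE_OPS = {
    # ... existing layout sensitive ops elided ...
    "GridSample": {"inputs": [0], "outputs": [0]},  # added for opset 16
}

def layout_sensitive_io(op_type):
    return LAYOUT_SENSITIVE_OPS.get(op_type, {"inputs": [], "outputs": []})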

* Use separate build directories for full and mobile iOS packages. (#10835)

* Address performance issue with abseil flat_hash_table. (#10819)

When a hash table is returned by value across a DLL boundary, it cannot find at least some of its entries, even though it contains all the entries that were originally there. Reverting to std::unordered_set pending further investigation.

* Mark end of version 11 C API. (#10803)

* Mark end of version 11 C API

* Add static_assert

* avoid using LocalFree on FormatMessageW buffer (#10796)

* remove local free

* Remove local free from onnxruntime

* don't allocate

* Change to use constexpr to satisfy CPU build warning

* Integrate C-API tests into Pipelines for release packages (#10794)

* add c-api test for package

* fix bug for running c-api test for package

* refine run application script

* remove redundant code

* include CUDA test

* Remove testing CUDA EP temporarily

* fix bug

* Code refactor

* try to fix YAML bug

* try to fix YAML bug

* try to fix YAML bug

* fix bug for multiple directories in Pipelines

* fix bug

* add comments and fix bug

* Update c-api-noopenmp-packaging-pipelines.yml

* Remove failOnStandardError flag in Pipelines

* Detect runtime CUDA JIT and warn the user (#10781)

* Use cudaMalloc vs cudaDeviceSynchronize and show the total time

* Update convert_onnx_models_to_ort.py to support runtime optimizations. (#10765)

Add runtime optimization support to ONNX -> ORT format conversion script.
Replace `--optimization_level`, `--use_nnapi`, and `--use_coreml` with a new `--optimization_style` option.

* Add multithreading test and put a lock on nvinfer1::createInferRuntime() for TRT EP (#10714)

* Add multithread unit test and put lock on library call

* update code

* remove debug code

* add comment

* add one session multi-threads inference

* Put lock for build engine all the time

* Update naming and comment

* remove unnecessary lock

* Revert "remove unnecessary lock"

This reverts commit 9c2317b.

* Fix handling of nodes inserted by NHWC transformer. (#10904) (#10925)

* Revert "Upsample support NHWC (#10554)" (#10917)

This reverts commit bd08f11.

Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>

* [python API] Change the import error raised when `C:\Windows\System32\vcruntime140_1.dll` is not found into a warning (#10927) (see the sketch after this entry)

* remove throw if C:\\Windows\\System32\\vcruntime140_1.dll cannot be found

* Add comments and update warning message

* adding back accidentally removed line

Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
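
A rough sketch of the behavior change in the python API entry above; the real check lives in the onnxruntime package's import-time validation and its exact message and handling differ:

import os
import warnings

def check_vcruntime():
    # Warn instead of raising ImportError when the MSVC runtime DLL is missing,
    # since some environments provide the runtime in another way.
    dll_path = r"C:\Windows\System32\vcruntime140_1.dll"
    if os.name == "nt" and not os.path.isfile(dll_path):
        warnings.warn(
            f"{dll_path} was not found; onnxruntime may fail to load until the "
            "Visual C++ runtime is installed.",
            UserWarning,
        )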

* [js] Create npm packaging pipeline (#10886)

* create npm packaging pipeline

* fix indentations

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Update npm-packaging-pipeline.yml for Azure Pipelines

* react-native-ci as a template

* fix typos

* fix template paths

* add a dependency

* change a stage name

* set different artifact name for each package

* fix typo

* Update npm-packaging-pipeline.yml for Azure Pipelines

Set a build Id for node npm package as a parameter

* Update npm-packaging-pipeline.yml for Azure Pipelines

Set a build Id for node npm package as a parameter

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Follow up update for python API checking if `vcruntime140_1.dll` is available (#10927) (#10933)

Co-authored-by: Hariharan Seshadri <hasesh@microsoft.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Pranav Sharma <prs@microsoft.com>
Co-authored-by: Ryan Lai <rylai@microsoft.com>
Co-authored-by: Ryan Hill <38674843+RyanUnderhill@users.noreply.github.com>
Co-authored-by: Yi-Hong Lyu <yilyu@microsoft.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
edgchen1 added a commit that referenced this pull request Mar 28, 2022
…sionState() (#10944)

Follow up to #10904.
- Move node EP assignment for ORT format into SessionState::FinalizeSessionState().
- Add unit test for #10904.
- Make convert_onnx_models_to_ort.py optimization level configurable via environment variable.
lavanyax pushed a commit to intel/onnxruntime that referenced this pull request Mar 29, 2022
seddonm1 pushed a commit to seddonm1/onnxruntime that referenced this pull request May 15, 2022
…sionState() (microsoft#10944)

Follow up to microsoft#10904.
- Move node EP assignment for ORT format into SessionState::FinalizeSessionState().
- Add unit test for microsoft#10904.
- Make convert_onnx_models_to_ort.py optimization level configurable via environment variable.