
Update python code to use flatbuffers 2.0 for TRT EP #10866

Merged: 2 commits merged into master from update_flatbuffers2.0 on Mar 16, 2022

Conversation

@chilo-ms (Contributor) commented Mar 14, 2022

Description: The ORT Python package doesn't pin a flatbuffers version, so the latest 2.0 release gets installed. This causes our Python scripts, which were written against the older 1.12 API, to fail due to breaking API changes.
After discussion, we decided to update flatbuffers to 2.0 on the Python side only and stay on 1.12 on the C++ side.
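
For illustration, one incompatibility commonly hit when Python code written against flatbuffers 1.12 runs on 2.0 is the changed `Builder.EndVector` signature. The sketch below assumes that class of break; the specific call is an illustration, not taken from this PR:

```python
import flatbuffers

builder = flatbuffers.Builder(1024)
name = builder.CreateString("conv1_output")

# Build a vector containing one string offset.
builder.StartVector(4, 1, 4)           # element size, element count, alignment
builder.PrependUOffsetTRelative(name)

# flatbuffers 1.12 expected the element count here: builder.EndVector(1)
# flatbuffers 2.0 removed the argument; passing it raises a TypeError.
vec = builder.EndVector()
```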

Motivation and Context

@chilo-ms requested review from yufenglee and jywu-msft on March 14, 2022 23:06
Reviewed lines (from the generated file):

```python
builder.StartObject(2)

"""This method is deprecated. Please switch to Start."""
```
Member:

Sounds like we will need to move away from the deprecated APIs (KeyValueAddKey, KeyValueAddValue, etc.)?
If not in this PR, then let's add an item to clean it up later.
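
For context, here is a sketch of what moving off the deprecated wrappers could look like for the KeyValue table discussed here; the import path and field names are assumptions based on this thread, not copied from the ORT sources:

```python
import flatbuffers
# Hypothetical import of the generated calibration-table module; the real
# package path inside onnxruntime may differ.
import CalTableFlatBuffers.KeyValue as KeyValue

builder = flatbuffers.Builder(1024)
key = builder.CreateString("conv1_output")
value = builder.CreateString("0.0235")

# Deprecated 1.x-style wrappers (still emitted by flatc 2.0, but flagged):
#   KeyValue.KeyValueStart(builder)
#   KeyValue.KeyValueAddKey(builder, key)
#   KeyValue.KeyValueAddValue(builder, value)
#   kv = KeyValue.KeyValueEnd(builder)

# Non-prefixed module-level functions generated by flatc 2.0:
KeyValue.Start(builder)
KeyValue.AddKey(builder, key)
KeyValue.AddValue(builder, value)
kv = KeyValue.End(builder)
builder.Finish(kv)
```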

chilo-ms (Author):

This file is automatically generated by the flatbuffers compiler from this schema:
https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/object_detection/trt/yolov3/trt_cal_table.fbs
I'm still figuring out how to generate the files without the deprecation messages.

Member:

Understood.
I just noticed that we are still using the deprecated APIs in quant_utils.py, is all.

chilo-ms (Author):

Also, flatbuffers' own example code carries similar deprecation messages:
https://github.com/google/flatbuffers/blob/master/tests/MyGame/Example2/Monster.py#L21
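
For reference, the wrapper pattern the flatbuffers 2.0 compiler emits (visible in the linked Monster.py) looks roughly like the sketch below; the table name and slot count are illustrative, not taken from the generated file in this PR:

```python
# The short, non-prefixed function is the primary API; the old prefixed
# name is kept only as a deprecated alias that forwards to it.
def Start(builder):
    builder.StartObject(2)

def KeyValueStart(builder):
    """This method is deprecated. Please switch to Start."""
    return Start(builder)
```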

Member:

Yes, it's good that they give us a heads-up on which APIs to use and which to avoid.

chilo-ms (Author):

> Understood. I just noticed that we are still using the deprecated APIs in quant_utils.py, is all.

Yes, I noticed as well. We should refactor the code in another PR.

@chilo-ms merged commit ce204d0 into master on Mar 16, 2022
@chilo-ms deleted the update_flatbuffers2.0 branch on March 16, 2022 16:18
chilo-ms added a commit that referenced this pull request Mar 16, 2022
chilo-ms added a commit that referenced this pull request Mar 18, 2022
* Update to flatbuffers v2.0.0 (#10866)

* Fix Reduced ops pipeline (#10861)

* Fix a couple of issues with the python package tools (#10858)

* Tweaks to the model utils
  * Add handling for a dim_value of -1 when replacing the entire input shape. This occurs in models exported from PaddlePaddle
  * make pytorch helpers accessible in package
  * make QDQ helpers accessible in package

* Fix wrong percentile values returned during calibration (#10847)

* Use numpy.percentile to get the lookup value.

* Use 1.0 as float value rather than integer.

* Add missing cdf parameter for `np.percentile`.

* Use 100. instead of 1.0

* Remove print.

* Update from @yufenglee

* Add support for opset 16 to transpose optimizer. (#10841)

* Add support for opset 16 to transpose optimizer.

The only change required is to add GridSample to the layout-sensitive ops. The existing layout-transpose handling works for it, as the first input and first output are layout sensitive.

Update the optimizer to be able to return an error message if it fails.

* Use separate build directories for full and mobile iOS packages. (#10835)

* Address performance issue with abseil flat_hash_table. (#10819)

When returning by value in a cross-DLL call, the hash table cannot find at least some of its entries, even though all of them are present. Reverting to std::unordered_set pending further investigation.

* Mark end of version 11 C API. (#10803)

* Mark end of version 11 C API

* Add static_assert

* avoid using LocalFree on FormatMessageW buffer (#10796)

* remove local free

* Remove local free from onnxruntime

* don't allocate

* Change to use constexpr to satisfy CPU build warning

* Integrate C-API tests into Pipelines for release packages (#10794)

* add c-api test for package

* fix bug for running c-api test for package

* refine run application script

* remove redundant code

* include CUDA test

* Remove testing CUDA EP temporarily

* fix bug

* Code refactor

* try to fix YAML bug

* try to fix YAML bug

* try to fix YAML bug

* fix bug for multiple directories in Pipelines

* fix bug

* add comments and fix bug

* Update c-api-noopenmp-packaging-pipelines.yml

* Remove failOnStandardError flag in Pipelines

* Detect runtime CUDA JIT and warn the user (#10781)

* Use cudaMalloc vs cudaDeviceSynchronize and show the total time

* Update convert_onnx_models_to_ort.py to support runtime optimizations. (#10765)

Add runtime optimization support to ONNX -> ORT format conversion script.
Replace `--optimization_level`, `--use_nnapi`, and `--use_coreml` with a new `--optimization_style` option.

* Add multithreading test and put a lock on nvinfer1::createInferRuntime() for TRT EP (#10714)

* Add multithread unit test and put lock on library call

* update code

* remove debug code

* add comment

* add one session multi-threads inference

* Put lock for build engine all the time

* Update naming and comment

* remove unnecessary lock

* Revert "remove unnecessary lock"

This reverts commit 9c2317b.

* Fix handling of nodes inserted by NHWC transformer. (#10904) (#10925)

* Revert "Upsample support NHWC (#10554)" (#10917)

This reverts commit bd08f11.

Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>

* [python API] Change raise import error when `C:\Windows\System32\vcruntime140_1.dll` is not found to warning (#10927)

* remove throw if C:\Windows\System32\vcruntime140_1.dll cannot be found

* Add comments and update warning message

* adding back accidentally removed line

Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>

* [js] Create npm packaging pipeline (#10886)

* create npm packaging pipeline

* fix indentations

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Update npm-packaging-pipeline.yml for Azure Pipelines

* react-native-ci as a template

* fix typos

* fix template paths

* add a dependency

* change a stage name

* set different artifact name for each package

* fix typo

* Update npm-packaging-pipeline.yml for Azure Pipelines

Set a build Id for node npm package as a parameter

* Update npm-packaging-pipeline.yml for Azure Pipelines

Set a build Id for node npm package as a parameter

* Update npm-packaging-pipeline.yml for Azure Pipelines

* Follow up update for python API checking if `vcruntime140_1.dll` is available (#10927) (#10933)

Co-authored-by: Hariharan Seshadri <hasesh@microsoft.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Pranav Sharma <prs@microsoft.com>
Co-authored-by: Ryan Lai <rylai@microsoft.com>
Co-authored-by: Ryan Hill <38674843+RyanUnderhill@users.noreply.github.com>
Co-authored-by: Yi-Hong Lyu <yilyu@microsoft.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
lavanyax pushed a commit to intel/onnxruntime that referenced this pull request Mar 29, 2022