build(tvm_utility): remove download logic from CMake and update documentation #4923
@@ -1,2 +1 @@
-artifacts/**/*.jpg
-data/
+data/models
@@ -50,35 +50,17 @@ error description.
### Neural Networks Provider
This package also provides a utility that makes pre-compiled neural networks available to the packages that use them for their inference.
The neural networks are compiled as part of the
[Model Zoo](https://github.com/autowarefoundation/modelzoo/) CI pipeline and saved to an S3 bucket.
This package exports cmake variables and functions for ease of access to those neural networks.
The `get_neural_network` function creates an abstraction for the artifact management.
The artifacts are saved under "data/" in the source directory of the package making use of the function.
Priority is given to user-provided files, under "data/user/${MODEL_NAME}/".
If there are no user-provided files, the function tries to reuse previously-downloaded artifacts.
If there are no previously-downloaded artifacts, and if the `DOWNLOAD_ARTIFACTS` cmake variable is set, they will be downloaded from the bucket.
Otherwise, nothing happens.
Users should provide model files under "data/user/${MODEL_NAME}/". Otherwise, nothing happens and compilation of the package will be skipped.
Review comment: If the artifacts are required for compilation, then they should be part of the source tree (via …).

Author reply: Actually, it is a bit confusing for now, because these packages don't seem to build anyway: the `DOWNLOAD_ARTIFACTS` flag is disabled by default, and this PR kind of shows it, since the last changes were made several months ago and the bug stayed hidden. So here I just follow the same logic. As I understood earlier, we are going to use Ansible to provide artifacts. For packages with TensorRT support this is a more or less straightforward process, as the model conversion from ONNX to TensorRT happens later, on the first run, not during the build. For TVM it is a bit more complicated, as it provides models in an already-compiled form, and to use them in a package you need to provide some model files at build time. But if I understand you correctly, you propose that there should be some default artifacts for all TVM packages as part of the source tree, and the only way to do that is … I think there is a somewhat similar way to compile the model for TVM before the first run by the user, the same as with TensorRT, but I'm not so familiar with TVM. Also, what is the idea behind the Debian packages? Should users be able to use them off the shelf with a built-in model? I guess in many cases users will have to use a model trained on their own data.

Reviewer reply: As far as I know, we only need the headers for building the packages, not the full models.
The Debian packages are a way for users to install Autoware in production environments without having to build it themselves, much like how ROS is distributed. Eventually, Autoware can become part of the ROS distribution, but that will come at a later stage.
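As a minimal illustration of the flow described above, a package's `CMakeLists.txt` might request a model as sketched below. This is a hypothetical sketch: the argument order of `get_neural_network` (model name, backend, variable receiving the dependency name) and the `yolo_v2_tiny`/`llvm` values are assumptions for illustration only, not the confirmed interface.

```cmake
# Hypothetical sketch: request the yolo_v2_tiny model compiled for the llvm
# backend. With the logic of this PR, the model files are expected to be
# provided by the user under "data/user/yolo_v2_tiny/" in the package source
# directory.
find_package(tvm_utility REQUIRED)  # assumed to make get_neural_network available

get_neural_network(yolo_v2_tiny llvm yolo_v2_tiny_dep)  # assumed signature
```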
The structure inside the source directory of the package making use of the function is as follows:
```{text}
.
├── data
│   ├── downloads
│   │   ├── ${MODEL 1}-${ARCH 1}-{BACKEND 1}-{VERSION 1}.tar.gz
│   │   ├── ...
│   │   └── ${MODEL ...}-${ARCH ...}-{BACKEND ...}-{VERSION ...}.tar.gz
│   ├── models
│   │   ├── ${MODEL 1}
│   │   │   ├── ...
│   │   │   └── inference_engine_tvm_config.hpp
│   │   ├── ...
│   │   └── ${MODEL ...}
│   │       └── ...
│   └── user
│       ├── ${MODEL 1}
│       │   ├── deploy_graph.json

@@ -90,36 +72,21 @@ The structure inside of the source directory of the package making use of the fu

│           └── ...
```
The `inference_engine_tvm_config.hpp` file needed for compilation by dependent packages is made available under "data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp".
The `inference_engine_tvm_config.hpp` file needed for compilation by dependent packages should be available under "data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp".
Dependent packages can use the cmake `add_dependencies` function with the name provided in the `DEPENDENCY` output parameter of `get_neural_network` to ensure this file is created before it gets used.

The other `deploy_*` files are installed to "models/${MODEL_NAME}/" under the `share` directory of the package.
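For example, a target that includes the generated header could be ordered after the model setup roughly as follows. The target name, source file, include path, and the `yolo_v2_tiny_dep` variable (assumed to hold the name returned through the `DEPENDENCY` output parameter) are placeholders for illustration, not part of the documented interface.

```cmake
# Hypothetical sketch: build a node against the generated configuration header
# and make sure the header exists before the node is compiled.
add_library(yolo_v2_tiny_node SHARED src/yolo_v2_tiny_node.cpp)
# Assumption: headers are included as "<model name>/inference_engine_tvm_config.hpp"
# relative to "data/models/" in this package's source directory.
target_include_directories(yolo_v2_tiny_node PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/data/models)
add_dependencies(yolo_v2_tiny_node ${yolo_v2_tiny_dep})
```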
The target version to be downloaded can be overwritten by setting the `MODELZOO_VERSION` cmake variable.
#### Assumptions / Known limits

If several packages make use of the same neural network, it will be downloaded once per package.

In case a requested artifact doesn't exist in the S3 bucket, the error message from ExternalProject is not explicit enough for the user to understand what went wrong.

In case the user manually sets `MODELZOO_VERSION` to "latest", the archive will not be re-downloaded when it gets updated in the S3 bucket (it is not a problem for tagged versions as they are not expected to be updated).
#### Inputs / Outputs

Inputs:

- `DOWNLOAD_ARTIFACTS` cmake variable; needs to be set to enable downloading the artifacts
- `MODELZOO_VERSION` cmake variable; can be used to overwrite the default target version of downloads
Outputs:

- `get_neural_network` cmake function; can be used to get a neural network compiled for a specific backend
- `get_neural_network` cmake function; creates the proper external dependency for a package, using the model provided by the user

In/Out:
- The `DEPENDENCY` argument of `get_neural_network` can be checked for the outcome of the function.
  It is an empty string when the neural network couldn't be made available.
  It is an empty string when the neural network wasn't provided by the user (see the sketch below).
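A minimal sketch of guarding the build on that outcome, assuming the dependency name was stored in a hypothetical `yolo_v2_tiny_dep` variable by `get_neural_network`:

```cmake
# Hypothetical sketch: skip the model-dependent targets when the user did not
# provide model files (the DEPENDENCY output then comes back as an empty string).
if("${yolo_v2_tiny_dep}" STREQUAL "")
  message(WARNING "yolo_v2_tiny not found under data/user/, skipping its node")
  return()
endif()
```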
## Security considerations
@@ -0,0 +1,55 @@
// Copyright 2021 Arm Limited and Contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "tvm_utility/pipeline.hpp"

#ifndef COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_  // NOLINT
#define COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_

namespace model_zoo
{
namespace perception
{
namespace camera_obstacle_detection
{
namespace yolo_v2_tiny
{
namespace tensorflow_fp32_coco
{

static const tvm_utility::pipeline::InferenceEngineTVMConfig config{
  {3, 0, 0},  // modelzoo_version

  "yolo_v2_tiny",  // network_name
  "llvm",          // network_backend

  "deploy_lib.so",        // network_module_path
  "deploy_graph.json",    // network_graph_path
  "deploy_param.params",  // network_params_path

  kDLCPU,  // tvm_device_type
  0,       // tvm_device_id

  {{"input", kDLFloat, 32, 1, {-1, 416, 416, 3}}},  // network_inputs

  {{"output", kDLFloat, 32, 1, {1, 13, 13, 425}}}  // network_outputs
};

}  // namespace tensorflow_fp32_coco
}  // namespace yolo_v2_tiny
}  // namespace camera_obstacle_detection
}  // namespace perception
}  // namespace model_zoo
#endif  // COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_
// NOLINT
Review comment: nit: This file can be removed if it's empty.