From bb9ab02194bb852cf093b3d0c643df3a6234d7f6 Mon Sep 17 00:00:00 2001
From: Max Ren
Date: Fri, 24 May 2024 13:53:30 -0700
Subject: [PATCH] Replace buck with cmake in docs (#3739)

Summary:
Per: https://docs.google.com/spreadsheets/d/1PoJt7P9qMkFSaMmS9f9j8dVcTFhOmNHotQYpwBySydI/edit#gid=0

We are also deprecating buck in the docs (from Gasoonjia).

Differential Revision: D57795491
---
 .../tutorial-xnnpack-delegate-lowering.md | 12 +------
 examples/xnnpack/README.md                | 31 +++++++++++++++++--
 2 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md
index b82a1a9f25..31f2b87b19 100644
--- a/docs/source/tutorial-xnnpack-delegate-lowering.md
+++ b/docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -152,7 +152,7 @@ cmake \
   -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
   -DEXECUTORCH_BUILD_XNNPACK=ON \
   -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
-  -DEXECUTORCH_ENABLE_LOGGING=1 \
+  -DEXECUTORCH_ENABLE_LOGGING=ON \
   -DPYTHON_EXECUTABLE=python \
   -Bcmake-out .
 ```
@@ -169,15 +169,5 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnn
 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
 ```
 
-
-## Running the XNNPACK Model with Buck
-Alternatively, you can use `buck2` to run the `.pte` file with XNNPACK delegate instructions in it on your host platform. You can follow the instructions here to install [buck2](getting-started-setup.md#Build-&-Run). You can now run it with the prebuilt `xnn_executor_runner` provided in the examples. This will run the model on some sample inputs.
-
-```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_fp32.pte
-# or to run the quantized variant
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_q8.pte
-```
-
 ## Building and Linking with the XNNPACK Backend
 You can build the XNNPACK backend [BUCK target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/targets.bzl#L54) and [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary such as an Android or iOS application. For more information on this you may take a look at this [resource](demo-apps-android.md) next.
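The hunk above ends by pointing at the XNNPACK backend's CMake target for linking into an application. As a minimal sketch of that step, the backend library can be built on its own from the tree configured earlier; the `xnnpack_backend` target name is an assumption based on the linked CMakeLists.txt, so verify it before relying on it:

```bash
# Minimal sketch (not part of the patch): build only the XNNPACK backend
# library. `xnnpack_backend` is assumed to be the target defined in
# backends/xnnpack/CMakeLists.txt.
cmake --build cmake-out --target xnnpack_backend -j9

# The resulting archive (e.g. cmake-out/backends/xnnpack/libxnnpack_backend.a)
# is what an Android or iOS application binary would link against.
```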
diff --git a/examples/xnnpack/README.md b/examples/xnnpack/README.md
index b5be3532ea..d5c19882ba 100644
--- a/examples/xnnpack/README.md
+++ b/examples/xnnpack/README.md
@@ -88,12 +88,37 @@ You can find more valid quantized example models by running:
 python3 -m examples.xnnpack.quantization.example --help
 ```
 
-A quantized model can be run via `executor_runner`:
+## Running the XNNPACK Model with CMake
+After exporting the XNNPACK-delegated model, we can now try running it with example inputs using CMake. We can build and use the `xnn_executor_runner`, a sample wrapper for the ExecuTorch runtime and the XNNPACK backend. First, configure the CMake build as follows:
 ```bash
-buck2 run examples/portable/executor_runner:executor_runner -- --model_path ./mv2_quantized.pte
+# cd to the root of the executorch repo
+cd executorch
+
+# Get a clean cmake-out directory
+rm -rf cmake-out
+mkdir cmake-out
+
+# Configure cmake
+cmake \
+  -DCMAKE_INSTALL_PREFIX=cmake-out \
+  -DCMAKE_BUILD_TYPE=Release \
+  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+  -DEXECUTORCH_BUILD_XNNPACK=ON \
+  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+  -DEXECUTORCH_ENABLE_LOGGING=ON \
+  -DPYTHON_EXECUTABLE=python \
+  -Bcmake-out .
 ```
-Please note that running a quantized model will require the presence of various quantized/dequantize operators in the [quantized kernel lib](../../kernels/quantized).
+Then you can build the runtime components with:
+```bash
+cmake --build cmake-out -j9 --target install --config Release
+```
+
+Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run it with the model you generated:
+```bash
+./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
+```
 
 ## Delegating a Quantized Model
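The README hunk above feeds a delegated `.pte` file to `xnn_executor_runner`, so that file has to be produced first. A minimal sketch of the export step, assuming the `examples.xnnpack.aot_compiler` module and the flag names used elsewhere in these examples:

```bash
# Minimal sketch (not part of the patch): export a quantized, XNNPACK-delegated
# MobileNetV2. Module path, flags, and output name are assumptions; check
# `python3 -m examples.xnnpack.aot_compiler --help`.
cd executorch
python3 -m examples.xnnpack.aot_compiler --model_name="mv2" --quantize --delegate

# Run the generated model with the runner built in the patch's instructions
# (the output file name may differ, e.g. mv2_xnnpack_q8.pte).
./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
```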