
Replace buck with cmake in docs
Summary:
Per: https://docs.google.com/spreadsheets/d/1PoJt7P9qMkFSaMmS9f9j8dVcTFhOmNHotQYpwBySydI/edit#gid=0

We are also deprecating buck in the docs, per Gasoonjia.

Differential Revision: D57795491
mcr229 authored and facebook-github-bot committed May 24, 2024
1 parent 5e9db9a commit d4cd0f4
Showing 2 changed files with 29 additions and 14 deletions.
12 changes: 1 addition & 11 deletions docs/source/tutorial-xnnpack-delegate-lowering.md
@@ -152,7 +152,7 @@ cmake \
 -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
 -DEXECUTORCH_BUILD_XNNPACK=ON \
 -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
--DEXECUTORCH_ENABLE_LOGGING=1 \
+-DEXECUTORCH_ENABLE_LOGGING=ON \
 -DPYTHON_EXECUTABLE=python \
 -Bcmake-out .
 ```
@@ -169,15 +169,5 @@ Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`
 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
 ```
 
-
-## Running the XNNPACK Model with Buck
-Alternatively, you can use `buck2` to run the `.pte` file with XNNPACK delegate instructions in it on your host platform. You can follow the instructions here to install [buck2](getting-started-setup.md#Build-&-Run). You can now run it with the prebuilt `xnn_executor_runner` provided in the examples. This will run the model on some sample inputs.
-
-```bash
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_fp32.pte
-# or to run the quantized variant
-buck2 run examples/xnnpack:xnn_executor_runner -- --model_path ./mv2_xnnpack_q8.pte
-```
-
 ## Building and Linking with the XNNPACK Backend
 You can build the XNNPACK backend [BUCK target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/targets.bzl#L54) and [CMake target](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/CMakeLists.txt#L83), and link it with your application binary, such as an Android or iOS application. For more information, take a look at this [resource](demo-apps-android.md) (a hedged build sketch follows this diff).
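As a rough illustration of the CMake route described in the hunk above, the sketch below builds only the XNNPACK backend library from an already configured tree and then locates the resulting archive for linking into your own binary. This is a hedged sketch, not part of this commit: the target name `xnnpack_backend` and the archive name are assumptions inferred from the linked CMakeLists.txt and may differ across versions.

```bash
# Hedged sketch: build just the XNNPACK backend target in a tree that was
# already configured with the cmake invocation shown above.
# `xnnpack_backend` is an assumed target name; verify it against
# backends/xnnpack/CMakeLists.txt in your checkout.
cmake --build cmake-out --target xnnpack_backend -j9

# Locate the static archive to link into your Android or iOS application,
# alongside the core ExecuTorch runtime library.
find cmake-out -name 'libxnnpack_backend*'
```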
31 changes: 28 additions & 3 deletions examples/xnnpack/README.md
@@ -88,12 +88,37 @@ You can find more valid quantized example models by running:
 python3 -m examples.xnnpack.quantization.example --help
 ```
 
-A quantized model can be run via `executor_runner`:
+## Running the XNNPACK Model with CMake
+After exporting the XNNPACK-delegated model, we can try running it with example inputs using CMake. We can build and use `xnn_executor_runner`, a sample wrapper for the ExecuTorch Runtime and XNNPACK Backend. We begin by configuring the CMake build as follows:
 ```bash
-buck2 run examples/portable/executor_runner:executor_runner -- --model_path ./mv2_quantized.pte
+# cd to the root of the executorch repo
+cd executorch
+
+# Get a clean cmake-out directory
+rm -rf cmake-out
+mkdir cmake-out
+
+# Configure cmake
+cmake \
+-DCMAKE_INSTALL_PREFIX=cmake-out \
+-DCMAKE_BUILD_TYPE=Release \
+-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+-DEXECUTORCH_BUILD_XNNPACK=ON \
+-DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+-DEXECUTORCH_ENABLE_LOGGING=ON \
+-DPYTHON_EXECUTABLE=python \
+-Bcmake-out .
 ```
 Please note that running a quantized model will require the presence of various quantize/dequantize operators in the [quantized kernel lib](../../kernels/quantized); see the hedged sketch after this diff.
+Then you can build the runtime components with:
+
+```bash
+cmake --build cmake-out -j9 --target install --config Release
+```
+
+Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_runner`. You can run it with the model you generated as follows:
+```bash
+./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_quantized.pte
+```
+
 ## Delegating a Quantized Model
 

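One follow-up on the quantize/dequantize note in the README hunk above: in some ExecuTorch versions the quantized kernel library is gated behind its own CMake option, so the configure step may need it enabled explicitly. The sketch below is an assumption, not part of this commit; the option name `EXECUTORCH_BUILD_KERNELS_QUANTIZED` may be spelled differently in your checkout.

```bash
# Hedged sketch: re-run the configure step with the quantized kernel
# library enabled so quantize/dequantize operators are linked in.
# EXECUTORCH_BUILD_KERNELS_QUANTIZED is an assumed option name; check
# the top-level CMakeLists.txt of your ExecuTorch version.
cmake \
    -DCMAKE_INSTALL_PREFIX=cmake-out \
    -DCMAKE_BUILD_TYPE=Release \
    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
    -DEXECUTORCH_BUILD_XNNPACK=ON \
    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
    -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
    -DEXECUTORCH_ENABLE_LOGGING=ON \
    -DPYTHON_EXECUTABLE=python \
    -Bcmake-out .
```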