diff --git a/backends/apple/mps/setup.md b/backends/apple/mps/setup.md
index b2006b474d..3014d395c2 100644
--- a/backends/apple/mps/setup.md
+++ b/backends/apple/mps/setup.md
@@ -60,7 +60,7 @@ In order to be able to successfully build and run a model using the MPS backend
 
 ## Setting up Developer Environment
 
-***Step 1.*** Please finish tutorial [Setting up executorch](getting-started-setup.md).
+***Step 1.*** Please finish tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup).
 
 ***Step 2.*** Install dependencies needed to lower MPS delegate:
 
diff --git a/examples/README.md b/examples/README.md
index 63f3d853ab..a7c8f77083 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -70,4 +70,4 @@ You will find demos of [ExecuTorch SDK](./sdk/) in the [`sdk/`](./sdk/) directory
 
 ## Dependencies
 
-Various models and workflows listed in this directory have dependencies on some other packages. You need to follow the setup guide in [Setting up ExecuTorch from GitHub](../docs/source/getting-started-setup.md) to have appropriate packages installed.
+Various models and workflows listed in this directory have dependencies on some other packages. You need to follow the setup guide in [Setting up ExecuTorch from GitHub](https://pytorch.org/executorch/stable/getting-started-setup) to have appropriate packages installed.
diff --git a/examples/apple/coreml/README.md b/examples/apple/coreml/README.md
index eb6dfe903d..675d5fe67c 100644
--- a/examples/apple/coreml/README.md
+++ b/examples/apple/coreml/README.md
@@ -15,7 +15,7 @@ coreml
 
 We will walk through an example model to generate a **CoreML** delegated binary file from a python `torch.nn.module` then we will use the `coreml/executor_runner` to run the exported binary file.
 
-1. Following the setup guide in [Setting Up ExecuTorch](/docs/source/getting-started-setup.md)
+1. Following the setup guide in [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
 you should be able to get the basic development environment for ExecuTorch working.
 
 2. Run `install_requirements.sh` to install dependencies required by the **CoreML** backend.
diff --git a/examples/apple/mps/README.md b/examples/apple/mps/README.md
index bea3ead90e..848288ff65 100644
--- a/examples/apple/mps/README.md
+++ b/examples/apple/mps/README.md
@@ -8,7 +8,7 @@ This README gives some examples on backend-specific model workflow.
 ## Prerequisite
 
 Please finish the following tutorials:
-- [Setting up executorch](../../../docs/website/docs/tutorials/00_setting_up_executorch.md).
+- [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup).
 - [Setting up MPS backend](../../../backends/apple/mps/setup.md).
 
 ## Delegation to MPS backend
diff --git a/examples/demo-apps/android/ExecuTorchDemo/README.md b/examples/demo-apps/android/ExecuTorchDemo/README.md
index aa2adbd8a8..7776be2b59 100644
--- a/examples/demo-apps/android/ExecuTorchDemo/README.md
+++ b/examples/demo-apps/android/ExecuTorchDemo/README.md
@@ -14,7 +14,7 @@ This guide explains how to setup ExecuTorch for Android using a demo app. The ap
 :::{grid-item-card} Prerequisites
 :class-card: card-prerequisites
 
-* Refer to [Setting up ExecuTorch](getting-started-setup.md) to set up the repo and dev environment.
+* Refer to [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment.
 * Download and install [Android Studio and SDK](https://developer.android.com/studio).
 * Supported Host OS: CentOS, macOS Ventura (M1/x86_64). See below for Qualcomm HTP specific requirements.
 * *Qualcomm HTP Only[^1]:* To build and run on Qualcomm's AI Engine Direct, please follow [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](build-run-qualcomm-ai-engine-direct-backend.md) for hardware and software pre-requisites.
diff --git a/examples/demo-apps/apple_ios/README.md b/examples/demo-apps/apple_ios/README.md
index 8de7c66f24..4b529d451a 100644
--- a/examples/demo-apps/apple_ios/README.md
+++ b/examples/demo-apps/apple_ios/README.md
@@ -40,8 +40,15 @@ pip --version
 
 ### 3. Getting Started Tutorial
 
-Before proceeding, follow the [Setting Up ExecuTorch](getting-started-setup.md)
-tutorial to configure the basic environment.
+Before proceeding, follow the [Setting Up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
+tutorial to configure the basic environment. Feel free to skip building anything
+just yet. Make sure you have all the required dependencies installed, including
+the following tools:
+
+- Buck2 (as `/tmp/buck2`)
+- CMake (`cmake` available in `$PATH`)
+- FlatBuffers Compiler (`flatc` available in `$PATH` or via the
+  `$FLATC_EXECUTABLE` environment variable)
 
 ### 4. Backend Dependencies
diff --git a/examples/models/llama2/README.md b/examples/models/llama2/README.md
index 04bbcb79fb..b91bf1f038 100644
--- a/examples/models/llama2/README.md
+++ b/examples/models/llama2/README.md
@@ -26,7 +26,7 @@ This example tries to reuse the Python code, with modifications to make it compa
 
 # Instructions:
 
-1. Follow the [tutorial](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/00_setting_up_executorch.md) to set up ExecuTorch
+1. Follow the [tutorial](https://pytorch.org/executorch/stable/getting-started-setup) to set up ExecuTorch
 2. `cd examples/third-party/llama`
 3. `pip install -e .`
 4. Go back to `executorch` root, run `python3 -m examples.portable.scripts.export --model_name="llama2"`. The exported program, llama2.pte would be saved in current directory
diff --git a/examples/portable/README.md b/examples/portable/README.md
index dea3ab587e..cc04d8fd7e 100644
--- a/examples/portable/README.md
+++ b/examples/portable/README.md
@@ -20,7 +20,7 @@ We will walk through an example model to generate a `.pte` file in [portable mod
 from the [`models/`](../models) directory using scripts in the `portable/scripts` directory.
 Then we will run on the `.pte` model on the ExecuTorch runtime. For that we will use `executor_runner`.
 
-1. Following the setup guide in [Setting up ExecuTorch from GitHub](../../docs/source/getting-started-setup.md)
+1. Following the setup guide in [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup)
 you should be able to get the basic development environment for ExecuTorch working.
 
 2. Using the script `portable/scripts/export.py` generate a model binary file by selecting a
diff --git a/examples/portable/custom_ops/README.md b/examples/portable/custom_ops/README.md
index e80484fb2a..ce9d811e21 100644
--- a/examples/portable/custom_ops/README.md
+++ b/examples/portable/custom_ops/README.md
@@ -3,7 +3,7 @@ This folder contains examples to register custom operators into PyTorch as well
 
 ## How to run
 
-Prerequisite: finish the [setting up wiki](../../../docs/source/getting-started-setup.md).
+Prerequisite: finish the [setup guide](https://pytorch.org/executorch/stable/getting-started-setup).
 
 Run:
 
diff --git a/examples/qualcomm/README.md b/examples/qualcomm/README.md
index c9f8078ab8..692d25a9a8 100644
--- a/examples/qualcomm/README.md
+++ b/examples/qualcomm/README.md
@@ -8,7 +8,7 @@ Here are some general information and limitations.
 
 ## Prerequisite
 
-Please finish tutorial [Setting up executorch](../../docs/source/getting-started-setup.md).
+Please finish the tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup).
 
 Please finish [setup QNN backend](../../backends/qualcomm/setup.md).
diff --git a/examples/sdk/README.md b/examples/sdk/README.md
index b25f69bab9..c9332701fc 100644
--- a/examples/sdk/README.md
+++ b/examples/sdk/README.md
@@ -14,7 +14,7 @@ examples/sdk
 
 We will use an example model (in `torch.nn.Module`) and its representative inputs, both from [`models/`](../models) directory, to generate a [BundledProgram(`.bp`)](../../docs/source/sdk-bundled-io.md) file using the [script](scripts/export_bundled_program.py). Then we will use [sdk_example_runner](sdk_example_runner/sdk_example_runner.cpp) to execute the `.bp` model on the ExecuTorch runtime and verify the model on BundledProgram API.
 
-1. Sets up the basic development environment for ExecuTorch by [Setting up ExecuTorch from GitHub](../../docs/source/getting-started-setup.md).
+1. Set up the basic development environment for ExecuTorch by following [Setting up ExecuTorch from GitHub](https://pytorch.org/executorch/stable/getting-started-setup).
 
 2. Using the [script](scripts/export_bundled_program.py) to generate a BundledProgram binary file by retreiving a `torch.nn.Module` model and its representative inputs from the list of available models in the [`models/`](../models) dir。
diff --git a/examples/selective_build/README.md b/examples/selective_build/README.md
index f5576eb288..94acf45858 100644
--- a/examples/selective_build/README.md
+++ b/examples/selective_build/README.md
@@ -3,7 +3,7 @@ To optimize binary size of ExecuTorch runtime, selective build can be used. This
 
 ## How to run
 
-Prerequisite: finish the [setting up wiki](../../docs/source/getting-started-setup.md).
+Prerequisite: finish the [setup guide](https://pytorch.org/executorch/stable/getting-started-setup).
 
 Run: