From 2de6cb0605681db39314818fdddd54a33cc426b0 Mon Sep 17 00:00:00 2001
From: Hansong <107070759+kirklandsign@users.noreply.github.com>
Date: Tue, 23 Jul 2024 15:55:35 -0400
Subject: [PATCH] Delete examples/models/phi-3-mini/README.md (#4379)

---
 examples/models/phi-3-mini/README.md | 28 ----------------------------
 1 file changed, 28 deletions(-)
 delete mode 100644 examples/models/phi-3-mini/README.md

diff --git a/examples/models/phi-3-mini/README.md b/examples/models/phi-3-mini/README.md
deleted file mode 100644
index af7ae912ea..0000000000
--- a/examples/models/phi-3-mini/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Summary
-This example demonstrates how to run a [Phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) 3.8B model via ExecuTorch. We use XNNPACK to accelarate the performance and XNNPACK symmetric per channel quantization.
-
-# Instructions
-## Step 1: Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_requirements.sh --pybind xnnpack`
-2. Phi-3 Mini-128K-Instruct has been integrated in the development version (4.41.0.dev0) of transformers. Make sure that you install transformers with version at least 4.41.0: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`
-
-
-## Step 2: Prepare and run the model
-1. Download the `tokenizer.model` from HuggingFace.
-```
-cd examples/models/phi-3-mini
-wget -O tokenizer.model https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/tokenizer.model?download=true
-```
-2. Export the model. This step will take a few minutes to finish.
-```
-python export_model.py
-```
-3. Build and run the runner.
-```
-mkdir cmake-out
-cd cmake-out
-cmake ..
-cd ..
-cmake --build cmake-out -j10
-./cmake-out/phi_3_mini_runner
-```