Update readme.
Summary: .

Reviewed By: cccclai

Differential Revision: D56532283

fbshipit-source-id: 62d7c9e8583fdb5c9a1b2e781e80799c06682aae
shoumikhin authored and facebook-github-bot committed Apr 24, 2024
1 parent b669056 commit ce1e9c1
Showing 1 changed file with 22 additions and 5 deletions.
27 changes: 22 additions & 5 deletions examples/demo-apps/apple_ios/LLaMA/README.md
@@ -5,9 +5,19 @@ This app demonstrates the use of the LLaMA chat app demonstrating local inference…
<img src="../_static/img/llama_ios_app.png" alt="iOS LLaMA App" /><br>

## Prerequisites
* [Xcode 15](https://developer.apple.com/xcode).
* [iOS 17 SDK](https://developer.apple.com/ios).
* Set up your ExecuTorch repo and dev environment, if you haven’t already, by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide.
* [Xcode 15](https://developer.apple.com/xcode)
* [iOS 17 SDK](https://developer.apple.com/ios)
* Set up your ExecuTorch repo and dev environment, if you haven’t already, by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide:

```bash
git clone -b release/0.2 https://github.com/pytorch/executorch.git
cd executorch
git submodule update --init

python3 -m venv .venv && source .venv/bin/activate

./install_requirements.sh
```

## Exporting models
Please refer to the [ExecuTorch Llama2 docs](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md) to export the model.
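For reference, a 4-bit-quantized XNNPACK export at the time of this commit looked roughly like the sketch below. Treat it as an illustration only: take the authoritative flags from the Llama2 docs linked above, since they change between releases, and the paths in angle brackets are placeholders.

```bash
# Export Llama 2 7B to a .pte file with 4-bit (8da4w) quantization for the
# XNNPACK backend. Flags are illustrative and may differ in your checkout --
# follow the Llama2 docs linked above for the exact command.
python -m examples.models.llama2.export_llama \
  --checkpoint <path/to/consolidated.00.pth> \
  -p <path/to/params.json> \
  -kv --use_sdpa_with_kv_cache -X \
  -qmode 8da4w --group_size 128 \
  -d fp32 \
  --output_name "llama2.pte"

# Convert the SentencePiece tokenizer to the binary format the demo app loads.
# The module path is an assumption based on the 0.2-era layout.
python -m examples.models.llama2.tokenizer.tokenizer \
  -t <path/to/tokenizer.model> \
  -o tokenizer.bin
```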
@@ -16,10 +26,11 @@

1. Open the [project](https://github.com/pytorch/executorch/blob/main/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj) in Xcode.
2. Run the app (cmd+R).
3. In the app UI, pick a model and tokenizer to use, type a prompt, and tap the arrow button, as shown in the [video](../_static/img/llama_ios_app.mp4).
3. In the app UI, pick a model and tokenizer to use, type a prompt, and tap the arrow button
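If you prefer the command line over pressing cmd+R, a plain `xcodebuild` invocation should also work. This is a sketch: the scheme name and simulator destination below are assumptions, so check the Xcode project for the real values.

```bash
# Build the demo app for the iOS Simulator from the repo root.
# "LLaMA" as the scheme and "iPhone 15" as the simulator are assumptions --
# adjust them to match the project's schemes and your installed simulators.
xcodebuild build \
  -project examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj \
  -scheme LLaMA \
  -destination 'platform=iOS Simulator,name=iPhone 15'
```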

```{note}
The ExecuTorch runtime is distributed as a Swift package providing .xcframework bundles as prebuilt binary targets. Xcode will download and cache the package on the first run, which will take some time.
The ExecuTorch runtime is distributed as a Swift package providing .xcframework bundles as prebuilt binary targets.
Xcode will download and cache the package on the first run, which will take some time.
```
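If you want to pre-fetch the package before opening Xcode, or package resolution seems stuck, resolving the dependencies from the command line is one option. This is a sketch under the same assumption as above that the scheme is named `LLaMA`.

```bash
# Pre-resolve the Swift package dependencies so the first in-Xcode build is faster.
# The scheme name "LLaMA" is an assumption -- use the one defined in the project.
xcodebuild -resolvePackageDependencies \
  -project examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj \
  -scheme LLaMA
```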

## Copy the model to Simulator
@@ -33,5 +44,11 @@
2. Navigate to the Files tab and drag&drop the model and tokenizer files onto the iLLaMA folder.
3. Wait until the files are copied.
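On the Simulator, an alternative to drag&drop is copying the files into the app's data container with `simctl`. This is a minimal sketch under two assumptions: the bundle identifier below is a placeholder, and the app's iLLaMA folder is taken to be its Documents directory.

```bash
# Locate the data container of the installed app on a booted simulator.
# <your.bundle.identifier> is a placeholder -- read the real bundle ID from the
# Xcode project. This assumes the app's "iLLaMA" folder is its Documents directory.
APP_DATA=$(xcrun simctl get_app_container booted <your.bundle.identifier> data)

# Copy the exported model and tokenizer where the app can find them.
cp llama2.pte tokenizer.bin "$APP_DATA/Documents/"
```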

Click the image below to see it in action!

<a href="https://pytorch.org/executorch/main/_static/img/llama_ios_app.mp4">
<img src="https://pytorch.org/executorch/main/_static/img/llama_ios_app.png" width="600" alt="iOS app running a LLaMA model">
</a>

## Reporting Issues
If you encounter any bugs or issues while following this tutorial, please file a bug/issue on [GitHub](https://github.com/pytorch/executorch/issues/new).
