Commit

L3p/readme prompt updates (#440)
HamidShojanazeri authored Apr 18, 2024
2 parents 0f7d588 + b49f31c commit d0d36d2
Showing 3 changed files with 32 additions and 5 deletions.
22 changes: 22 additions & 0 deletions README.md
@@ -2,6 +2,28 @@
<!-- markdown-link-check-disable -->
The 'llama-recipes' repository is a companion to the [Meta Llama 2](https://github.com/meta-llama/llama) and [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem.
<!-- markdown-link-check-enable -->
> [!IMPORTANT]
> Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
> | Token | Description |
> |---|---|
> | `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
> | `<\|eot_id\|>` | This signifies the end of the message in a turn. |
> | `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles are: system, user, assistant. |
> | `<\|end_of_text\|>` | This is equivalent to the EOS token. On generating this token, Llama 3 will cease to generate more tokens. |
>
> A multi-turn conversation with Llama 3 follows this prompt template:
> ```
> <|begin_of_text|><|start_header_id|>system<|end_header_id|>
>
> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
>
> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
>
> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
>
> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
> ```
> More details on the new tokenizer and prompt template: <PLACEHOLDER_URL>
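A minimal sketch of how the special tokens above compose into a multi-turn prompt string. The helper name and structure are illustrative (not part of llama-recipes); in practice the chat template shipped with the model's tokenizer should be preferred.

```python
def build_llama3_prompt(system_prompt, turns):
    """Assemble a Llama 3 chat prompt.

    turns: list of (role, message) pairs, where role is "user" or "assistant".
    """
    prompt = "<|begin_of_text|>"
    # The system message comes first, wrapped in role headers and closed with <|eot_id|>.
    prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
    for role, message in turns:
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{message}<|eot_id|>"
    # Finish with an open assistant header so the model generates the next reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


prompt = build_llama3_prompt(
    "You are a helpful assistant.",
    [("user", "Hi!"), ("assistant", "Hello! How can I help?"), ("user", "Tell me a joke.")],
)
print(prompt)
```

The trailing open `assistant` header is what cues the model to produce the next turn; generation then stops when the model emits `<|eot_id|>` (end of turn) or `<|end_of_text|>`.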

> [!NOTE]
> The llama-recipes repository was recently refactored to improve the developer experience when using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
>
10 changes: 5 additions & 5 deletions recipes/responsible_ai/README.md
@@ -1,11 +1,11 @@
-# Llama Guard
+# Meta Llama Guard
 
-Llama Guard is a new experimental model that provides input and output guardrails for LLM deployments. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard).
+Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard2).
 
-**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/LlamaGuard-7b).
+**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B).
 
 ### Running locally
-The [llama_guard](llama_guard) folder contains the inference script to run Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
+The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
 
 ### Running on the cloud
-The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Llama Guard on cloud hosted endpoints.
+The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
5 changes: 5 additions & 0 deletions scripts/spellcheck_conf/wordlist.txt
@@ -1289,3 +1289,8 @@ tokenize
 tokenizer's
 tokenizers
 warmup
+BOS
+EOS
+eot
+multiturn
+tiktoken
