Fix more package naming (#864)
init27 authored Jan 22, 2025
2 parents e132a24 + 6a3bce7 commit 46796d5
Showing 28 changed files with 138 additions and 135 deletions.
4 changes: 2 additions & 2 deletions 3p-integrations/crusoe/vllm-fp8/README.md
@@ -23,8 +23,8 @@ source $HOME/.cargo/env

Now, clone the recipes and navigate to this tutorial. Initialize the virtual environment and install dependencies:
```bash
git clone https://github.com/meta-llama/llama-recipes.git
cd llama-recipes/recipes/3p_integrations/crusoe/vllm-fp8/
git clone https://github.com/meta-llama/llama-cookbook.git
cd llama-cookbook/recipes/3p_integrations/crusoe/vllm-fp8/
uv add vllm setuptools
```
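If you want to verify the environment before running the tutorial, `uv run` executes a command inside the project environment that `uv add` created, with no manual activation needed. A minimal check, assuming `uv` manages this directory as a project:

```bash
# Optional sanity check: confirm vLLM is importable in the uv-managed
# environment (assumes `uv add` above created a project environment here).
uv run python -c "import vllm; print(vllm.__version__)"
```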

2 changes: 1 addition & 1 deletion 3p-integrations/llama_on_prem.md
@@ -1,6 +1,6 @@
# Llama 3 On-Prem Inference Using vLLM and TGI

Enterprise customers may prefer to deploy Llama 3 on-prem and run Llama in their own servers. This tutorial shows how to use Llama 3 with [vLLM](https://github.com/vllm-project/vllm) and Hugging Face [TGI](https://github.com/huggingface/text-generation-inference), two leading open-source tools to deploy and serve LLMs, and how to create vLLM and TGI hosted Llama 3 instances with [LangChain](https://www.langchain.com/), an open-source LLM app development framework which we used for our other demo apps: [Getting to Know Llama](../getting-started/build_with_Llama_3_2.ipynb), Running Llama 3 <!-- markdown-link-check-disable -->[locally](https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/Running_Llama3_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb) <!-- markdown-link-check-disable --> and [in the cloud](https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/RAG/hello_llama_cloud.ipynb). See [here](https://medium.com/@rohit.k/tgi-vs-vllm-making-informed-choices-for-llm-deployment-37c56d7ff705) for a detailed comparison of vLLM and TGI.
Enterprise customers may prefer to deploy Llama 3 on-prem and run Llama in their own servers. This tutorial shows how to use Llama 3 with [vLLM](https://github.com/vllm-project/vllm) and Hugging Face [TGI](https://github.com/huggingface/text-generation-inference), two leading open-source tools to deploy and serve LLMs, and how to create vLLM and TGI hosted Llama 3 instances with [LangChain](https://www.langchain.com/), an open-source LLM app development framework which we used for our other demo apps: [Getting to Know Llama](../getting-started/build_with_Llama_3_2.ipynb), Running Llama 3 <!-- markdown-link-check-disable -->[locally](https://github.com/meta-llama/llama-cookbook/blob/main/recipes/quickstart/Running_Llama3_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb) <!-- markdown-link-check-disable --> and [in the cloud](https://github.com/meta-llama/llama-cookbook/blob/main/recipes/quickstart/RAG/hello_llama_cloud.ipynb). See [here](https://medium.com/@rohit.k/tgi-vs-vllm-making-informed-choices-for-llm-deployment-37c56d7ff705) for a detailed comparison of vLLM and TGI.

For [Ollama](https://ollama.com)-based on-prem inference with Llama 3, see the Running Llama 3 locally notebook above.
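For a concrete starting point, a deployment like the one this tutorial describes is typically launched through vLLM's OpenAI-compatible server. The model ID and port below are illustrative choices, not values prescribed by this tutorial:

```bash
# Minimal sketch: serve Llama 3 behind an OpenAI-compatible HTTP API with vLLM.
# The model ID and port are placeholders; gated models also require
# Hugging Face authentication before download.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --port 8000
```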

2 changes: 1 addition & 1 deletion 3p-integrations/tgi/README.md
@@ -9,7 +9,7 @@ In case the model was fine tuned with LoRA method we need to merge the weights o
The script takes the base model, the PEFT weight folder, and an output directory as arguments:

```
python -m llama_recipes.recipes.3p_integration.tgi.merge_lora_weights --base_model llama-7B --peft_model ft_output --output_dir data/merged_model_output
python -m llama_cookbook.recipes.3p_integration.tgi.merge_lora_weights --base_model llama-7B --peft_model ft_output --output_dir data/merged_model_output
```

## Step 1: Serving the model
121 changes: 62 additions & 59 deletions 3p-integrations/using_externally_hosted_llms.ipynb

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions end-to-end-use-cases/RAFT-Chatbot/README.md
@@ -116,7 +116,7 @@ As shown in the above example, we have a "question" section for the generated qu
To create a reliable evaluation set, it's ideal to use human-annotated question and answer pairs. This ensures that the questions are relevant and the answers are accurate. However, human annotation is time-consuming and costly. For demonstration purposes, we'll use a subset of the validation set, which will never be used in the fine-tuning. We only need to keep the "question" section and the final answer section, marked by the `<ANSWER>` tag in "cot_answer". We'll manually check each example and select only the good ones. We want to ensure that the questions are general enough to be used for web search engine queries and are related to Llama. We'll also use some QA pairs from our FAQ page, with modifications. This will result in 72 question and answer pairs as our evaluation set, saved as `eval_llama.json`.

## Fine-Tuning Steps
Once the RAFT dataset is ready in JSON format, we can start fine-tuning. Unfortunately, the LORA method didn't produce good results, so we'll use the full fine-tuning method. We can use the following commands as an example in the llama-recipes main folder:
Once the RAFT dataset is ready in JSON format, we can start fine-tuning. Unfortunately, the LoRA method didn't produce good results, so we'll use the full fine-tuning method. We can use the following commands as an example in the llama-cookbook main folder:

```bash
export PATH_TO_ROOT_FOLDER=./raft-8b
```

@@ -129,7 +129,7 @@ For more details on multi-GPU fine-tuning, please refer to the [multigpu_finetun
Next, we need to convert the FSDP checkpoint to a HuggingFace checkpoint using the following command:

```bash
python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-Llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER"
python src/llama_cookbook/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-Llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER"
```
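To confirm the conversion produced a loadable HuggingFace checkpoint, a quick load test can help; this assumes `PATH_TO_ROOT_FOLDER` is `./raft-8b` as set above and that the converter wrote a standard HF layout:

```bash
# Sanity check: load the consolidated checkpoint with transformers.
# Assumes ./raft-8b now contains config.json plus model weights.
python -c "from transformers import AutoModelForCausalLM; \
AutoModelForCausalLM.from_pretrained('./raft-8b', torch_dtype='auto')"
```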

For more details on FSDP to HuggingFace checkpoint conversion, please refer to the [readme](../../getting-started/finetuning/multigpu_finetuning.md) in the inference/local_inference recipe.
@@ -25,8 +25,8 @@ Given those differences, the numbers from this recipe can not be compared to the
Please install lm-evaluation-harness and our llama-recipes repo as follows:

```
git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
git clone git@github.com:meta-llama/llama-cookbook.git
cd llama-cookbook
pip install -U pip setuptools
pip install -e .
pip install lm-eval[math,ifeval,sentencepiece,vllm]==0.4.3
```
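Once installed, an evaluation run follows the harness's standard `lm_eval` CLI. The task selection and model arguments below are illustrative examples, not the exact configuration behind this recipe's published numbers:

```bash
# Illustrative lm-eval (v0.4.x) invocation using the vLLM backend.
# Task list and model args are examples only; adjust to your setup.
lm_eval --model vllm \
    --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,dtype=auto \
    --tasks gsm8k \
    --batch_size auto
```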
2 changes: 1 addition & 1 deletion end-to-end-use-cases/coding/text2sql/quickstart.ipynb
@@ -1,5 +1,5 @@
{
"cells": [
"cells": [llama-cookbook
{
"cell_type": "markdown",
"id": "e8cba0b6",
@@ -402,7 +402,7 @@
"In this example, we will be deploying a Meta Llama 3 8B chat HuggingFace model with the Text-generation-inference framework on-permises. \n",
"This would allow us to directly wire the API server with our chatbot. \n",
"There are alternative solutions to deploy Meta Llama 3 models on-permises as your local API server. \n",
"You can find our complete guide [here](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/model_servers/llama-on-prem.md)."
"You can find our complete guide [here](https://github.com/meta-llama/llama-cookbook/blob/main/recipes/inference/model_servers/llama-on-prem.md)."
]
},
{
2 changes: 1 addition & 1 deletion end-to-end-use-cases/github_triage/README.md
@@ -32,7 +32,7 @@ pip install -r requirements.txt
### Running the Tool

```bash
python triage.py --repo_name='meta-llama/llama-recipes' --start_date='2024-08-14' --end_date='2024-08-27'
python triage.py --repo_name='meta-llama/llama-cookbook' --start_date='2024-08-14' --end_date='2024-08-27'
```

### Output
2 changes: 1 addition & 1 deletion end-to-end-use-cases/github_triage/walkthrough.ipynb
@@ -1,4 +1,4 @@
{
"cells": [
{
"cell_type": "code",
@@ -151,7 +151,7 @@
"import os\n",
"import getpass\n",
"\n",
"from llama_recipes.inference.llm import TOGETHER, OPENAI, ANYSCALE\n",
"from llama_cookbook.inference.llm import TOGETHER, OPENAI, ANYSCALE\n",
"\n",
"if \"EXTERNALLY_HOSTED_LLM_TOKEN\" not in os.environ:\n",
" os.environ[\"EXTERNALLY_HOSTED_LLM_TOKEN\"] = getpass.getpass(prompt=\"Provide token for LLM provider\")\n",
2 changes: 1 addition & 1 deletion end-to-end-use-cases/responsible_ai/llama_guard/README.md
@@ -6,7 +6,7 @@ This [notebook](llama_guard_text_and_vision_inference.ipynb) shows how to load t

## Requirements
1. Access to Llama Guard model weights on Hugging Face. To get access, follow the steps described at the top of the model card on [Hugging Face](https://huggingface.co/meta-llama/Llama-Guard-3-1B)
2. Llama recipes package and its dependencies [installed](https://github.com/meta-llama/llama-recipes?tab=readme-ov-file#installing)
2. Llama recipes package and its dependencies [installed](https://github.com/meta-llama/llama-cookbook?tab=readme-ov-file#installing)
3. Pillow package installed

## Inference Safety Checker
@@ -33,7 +33,7 @@
"\n",
"Llama Guard is provided with a reference taxonomy explained on [this page](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-3), where the prompting format is also explained. \n",
"\n",
"The functions below combine already existing [prompt formatting code in llama-recipes](https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/inference/prompt_format_utils.py) with custom code to aid in the custimization of the taxonomy. "
"The functions below combine already existing [prompt formatting code in llama-recipes](https://github.com/meta-llama/llama-recipes/blob/main/src/llama_cookbook/inference/prompt_format_utils.py) with custom code to aid in the custimization of the taxonomy. "
]
},
{
@@ -80,7 +80,7 @@
],
"source": [
"from enum import Enum\n",
"from llama_recipes.inference.prompt_format_utils import LLAMA_GUARD_3_CATEGORY, SafetyCategory, AgentType\n",
"from llama_cookbook.inference.prompt_format_utils import LLAMA_GUARD_3_CATEGORY, SafetyCategory, AgentType\n",
"from typing import List\n",
"\n",
"class LG3Cat(Enum):\n",
@@ -158,7 +158,7 @@
}
],
"source": [
"from llama_recipes.inference.prompt_format_utils import build_custom_prompt, create_conversation, PROMPT_TEMPLATE_3, LLAMA_GUARD_3_CATEGORY_SHORT_NAME_PREFIX\n",
"from llama_cookbook.inference.prompt_format_utils import build_custom_prompt, create_conversation, PROMPT_TEMPLATE_3, LLAMA_GUARD_3_CATEGORY_SHORT_NAME_PREFIX\n",
"from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
"from typing import List, Tuple\n",
"from enum import Enum\n",
@@ -463,13 +463,13 @@
"\n",
"To add additional datasets\n",
"\n",
"1. Copy llama-recipes/src/llama_recipes/datasets/toxicchat_dataset.py \n",
"1. Copy llama-recipes/src/llama_cookbook/datasets/toxicchat_dataset.py \n",
"2. Modify the file to change the dataset used\n",
"3. Add references to the new dataset in \n",
" - llama-recipes/src/llama_recipes/configs/datasets.py\n",
" - llama_recipes/datasets/__init__.py\n",
" - llama_recipes/datasets/toxicchat_dataset.py\n",
" - llama_recipes/utils/dataset_utils.py\n",
" - llama-recipes/src/llama_cookbook/configs/datasets.py\n",
" - llama_cookbook/datasets/__init__.py\n",
" - llama_cookbook/datasets/toxicchat_dataset.py\n",
" - llama_cookbook/utils/dataset_utils.py\n",
"\n",
"\n",
"## Evaluation\n",
@@ -484,7 +484,7 @@
"source": [
"from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
"\n",
"from llama_recipes.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion\n",
"from llama_cookbook.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion\n",
"from llama.llama.generation import Llama\n",
"\n",
"from typing import List, Optional, Tuple, Dict\n",
@@ -726,7 +726,7 @@
"# \"unsafe_content\": [\"O1\"]\n",
"# }\n",
"# ```\n",
"from llama_recipes.datasets.toxicchat_dataset import get_llamaguard_toxicchat_dataset\n",
"from llama_cookbook.datasets.toxicchat_dataset import get_llamaguard_toxicchat_dataset\n",
"validation_data = get_llamaguard_toxicchat_dataset(None, None, \"train\", return_jsonl = True)[0:100]\n",
"run_validation(validation_data, AgentType.USER, Type.HF, load_in_8bit = False, load_in_4bit = True)"
]
@@ -757,7 +757,7 @@
"outputs": [],
"source": [
"model_id = \"meta-llama/Llama-Guard-3-8B\"\n",
"from llama_recipes import finetuning\n",
"from llama_cookbook import finetuning\n",
"\n",
"finetuning.main(\n",
" model_name = model_id,\n",
2 changes: 1 addition & 1 deletion end-to-end-use-cases/responsible_ai/prompt_guard/README.md
@@ -8,4 +8,4 @@ This is a very small model and inference and fine-tuning are feasible on local C

## Requirements
1. Access to Prompt Guard model weights on Hugging Face. To get access, follow the steps described [here](https://github.com/facebookresearch/PurpleLlama/tree/main/Prompt-Guard#download)
2. Llama recipes package and it's dependencies [installed](https://github.com/meta-llama/llama-recipes?tab=readme-ov-file#installing)
2. Llama recipes package and its dependencies [installed](https://github.com/meta-llama/llama-cookbook?tab=readme-ov-file#installing)
2 changes: 1 addition & 1 deletion getting-started/README.md
@@ -1,4 +1,4 @@
## Llama-Recipes Getting Started
## Llama-cookbook Getting Started

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.

2 changes: 1 addition & 1 deletion getting-started/finetuning/finetune_vision_model.md
@@ -1,7 +1,7 @@
## Llama 3.2 Vision Models Fine-Tuning Recipe
This recipe steps you through how to finetune a Llama 3.2 vision model on the OCR VQA task using the [OCRVQA](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron/viewer/ocrvqa?row=0) dataset.

**Disclaimer**: As our vision models already have a very good OCR ability, here we use the OCRVQA dataset only for demonstration purposes of the required steps for fine-tuning our vision models with llama-recipes.
**Disclaimer**: As our vision models already have very good OCR ability, we use the OCRVQA dataset here only to demonstrate the steps required for fine-tuning our vision models with llama-cookbook.

### Fine-tuning steps

2 changes: 1 addition & 1 deletion getting-started/finetuning/finetuning.py
@@ -2,7 +2,7 @@
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.

import fire
from llama_recipes.finetuning import main
from llama_cookbook.finetuning import main

if __name__ == "__main__":
fire.Fire(main)
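Because this script simply forwards command-line arguments to `llama_cookbook.finetuning.main` through `fire.Fire`, training options can be passed as flags. A hypothetical single-GPU LoRA run, with flag names assumed from the package's training configs rather than taken from this diff:

```bash
# Hypothetical invocation; verify flag names against the package's
# train_config before use.
python finetuning.py \
    --model_name meta-llama/Meta-Llama-3.1-8B \
    --use_peft --peft_method lora \
    --output_dir ./peft_output \
    --batch_size_training 2 \
    --num_epochs 1
```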
18 changes: 9 additions & 9 deletions getting-started/finetuning/quickstart_peft_finetuning.ipynb
@@ -31,17 +31,17 @@
"source": [
"### Step 0: Install pre-requirements and convert checkpoint\n",
"\n",
"We need to have llama-recipes and its dependencies installed for this notebook. Additionally, we need to log in with the huggingface_cli and make sure that the account is able to to access the Meta Llama weights."
"We need to have llama-cookbook and its dependencies installed for this notebook. Additionally, we need to log in with the huggingface_cli and make sure that the account is able to to access the Meta Llama weights."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# uncomment if running from Colab T4\n",
"# ! pip install llama-recipes ipywidgets\n",
"# ! pip install llama-cookbook ipywidgets\n",
"\n",
"# import huggingface_hub\n",
"# huggingface_hub.login()"
@@ -59,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -80,7 +80,7 @@
"source": [
"import torch\n",
"from transformers import LlamaForCausalLM, AutoTokenizer\n",
"from llama_recipes.configs import train_config as TRAIN_CONFIG\n",
"from llama_cookbook.configs import train_config as TRAIN_CONFIG\n",
"\n",
"train_config = TRAIN_CONFIG()\n",
"train_config.model_name = \"meta-llama/Meta-Llama-3.1-8B\"\n",
@@ -221,8 +221,8 @@
"metadata": {},
"outputs": [],
"source": [
"from llama_recipes.configs.datasets import samsum_dataset\n",
"from llama_recipes.utils.dataset_utils import get_dataloader\n",
"from llama_cookbook.configs.datasets import samsum_dataset\n",
"from llama_cookbook.utils.dataset_utils import get_dataloader\n",
"\n",
"samsum_dataset.trust_remote_code = True\n",
"\n",
@@ -248,7 +248,7 @@
"source": [
"from peft import get_peft_model, prepare_model_for_kbit_training, LoraConfig\n",
"from dataclasses import asdict\n",
"from llama_recipes.configs import lora_config as LORA_CONFIG\n",
"from llama_cookbook.configs import lora_config as LORA_CONFIG\n",
"\n",
"lora_config = LORA_CONFIG()\n",
"lora_config.r = 8\n",
@@ -278,7 +278,7 @@
"outputs": [],
"source": [
"import torch.optim as optim\n",
"from llama_recipes.utils.train_utils import train\n",
"from llama_cookbook.utils.train_utils import train\n",
"from torch.optim.lr_scheduler import StepLR\n",
"\n",
"model.train()\n",
6 changes: 3 additions & 3 deletions getting-started/inference/local_inference/inference.py
@@ -10,9 +10,9 @@
import torch

from accelerate.utils import is_xpu_available
from llama_recipes.inference.model_utils import load_model, load_peft_model
from llama_cookbook.inference.model_utils import load_model, load_peft_model

from llama_recipes.inference.safety_utils import AgentType, get_safety_checker
from llama_cookbook.inference.safety_utils import AgentType, get_safety_checker
from transformers import AutoTokenizer


@@ -176,7 +176,7 @@ def inference(
)
],
title="Meta Llama3 Playground",
description="https://github.com/meta-llama/llama-recipes",
description="https://github.com/meta-llama/llama-cookbook",
).queue().launch(server_name="0.0.0.0", share=share_gradio)
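For context, this script is typically driven from the command line, reading the prompt from a file or stdin. The flags below are assumptions based on the local_inference recipe, not part of this diff:

```bash
# Hypothetical usage; --model_name and --peft_model are assumed from the
# local_inference recipe's documented options.
cat prompt.txt | python inference.py \
    --model_name meta-llama/Meta-Llama-3.1-8B-Instruct \
    --peft_model ./peft_output
```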


@@ -103,7 +103,7 @@ Connect your phone to your development machine. On OSX, you'll be prompted on th

## Building the Android Package with MLC

First edit the file under `android/MLCChat/mlc-package-config.json` and with the [mlc-package-config.json](./mlc-package-config.json) in llama-recipes.
First edit the file under `android/MLCChat/mlc-package-config.json`, replacing its contents with the [mlc-package-config.json](./mlc-package-config.json) in llama-cookbook.

To understand what these JSON fields mean you can refer to this [documentation](https://llm.mlc.ai/docs/deploy/android.html#step-2-build-runtime-and-model-libraries).
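As an illustration of that file's shape, a minimal configuration might look like the sketch below. The field names, model ID, and VRAM estimate are assumptions drawn from the MLC-LLM Android documentation; defer to the linked docs for the authoritative schema:

```bash
# Illustrative only: model ID, quantization suffix, and estimated_vram_bytes
# are placeholder assumptions, not values from this repository.
cat > android/MLCChat/mlc-package-config.json <<'EOF'
{
  "device": "android",
  "model_list": [
    {
      "model": "HF://mlc-ai/Llama-3.2-3B-Instruct-q4f16_1-MLC",
      "model_id": "llama-3.2-3b-instruct",
      "estimated_vram_bytes": 3000000000
    }
  ]
}
EOF
```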

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "llama-cookbook"
version = "0.0.5"
version = "0.0.5.post1"
authors = [
{ name="Hamid Shojanazeri", email="hamidnazeri@meta.com" },
{ name="Matthias Reso", email="mreso@meta.com" },
8 changes: 4 additions & 4 deletions src/llama_cookbook/configs/datasets.py
@@ -14,16 +14,16 @@ class samsum_dataset:
@dataclass
class grammar_dataset:
dataset: str = "grammar_dataset"
train_split: str = "src/llama_recipes/datasets/grammar_dataset/gtrain_10k.csv"
test_split: str = "src/llama_recipes/datasets/grammar_dataset/grammar_validation.csv"
train_split: str = "src/llama_cookbook/datasets/grammar_dataset/gtrain_10k.csv"
test_split: str = "src/llama_cookbook/datasets/grammar_dataset/grammar_validation.csv"


@dataclass
class alpaca_dataset:
dataset: str = "alpaca_dataset"
train_split: str = "train"
test_split: str = "val"
data_path: str = "src/llama_recipes/datasets/alpaca_data.json"
data_path: str = "src/llama_cookbook/datasets/alpaca_data.json"

@dataclass
class custom_dataset:
@@ -32,7 +32,7 @@ class custom_dataset:
train_split: str = "train"
test_split: str = "validation"
data_path: str = ""

@dataclass
class llamaguard_toxicchat_dataset:
dataset: str = "llamaguard_toxicchat_dataset"
4 changes: 2 additions & 2 deletions src/llama_cookbook/configs/wandb.py
@@ -6,10 +6,10 @@

@dataclass
class wandb_config:
project: str = 'llama_recipes' # wandb project name
project: str = 'llama_cookbook' # wandb project name
entity: Optional[str] = None # wandb entity name
job_type: Optional[str] = None
tags: Optional[List[str]] = None
group: Optional[str] = None
notes: Optional[str] = None
mode: Optional[str] = None
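These fields are presumably forwarded to `wandb.init` when logging is enabled during training. A hypothetical way to turn logging on and override the project name from the CLI, with the flag syntax assumed from this repo's config-override convention:

```bash
# Hypothetical invocation; --use_wandb and the nested --wandb_config.project
# override are assumed conventions, so verify against the training docs.
python -m llama_cookbook.finetuning \
    --model_name meta-llama/Meta-Llama-3.1-8B \
    --use_peft --peft_method lora \
    --use_wandb \
    --wandb_config.project my_llama_runs
```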