diff --git a/README.md b/README.md index 398e31b6f0..c2e0f4731e 100644 --- a/README.md +++ b/README.md @@ -50,13 +50,14 @@ Features: ## 🚀 Quick Start **Requirements**: + - NVIDIA GPU (Ampere or newer for `bf16` and Flash Attention) or AMD GPU - Python 3.11 - PyTorch ≥2.4.1 ### Installation -```shell +```bash pip3 install --no-build-isolation axolotl[flash-attn,deepspeed] # Download example axolotl configs, deepspeed configs @@ -68,7 +69,7 @@ Other installation approaches are described [here](https://axolotl-ai-cloud.gith ### Your First Fine-tune -```shell +```bash # Fetch axolotl examples axolotl fetch examples diff --git a/_quarto.yml b/_quarto.yml index 3ec1ce75b6..ddd172370f 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -3,10 +3,12 @@ project: website: title: "Axolotl" - description: "Fine-tuning" + description: "We make fine-tuning accessible, scalable, and fun" favicon: favicon.jpg + navbar: - title: Axolotl + logo: image/axolotl_logo_digital_white.svg + title: false background: dark pinned: false collapse: false @@ -25,33 +27,58 @@ website: contents: - text: Home href: index.qmd - - section: "How-To Guides" + + - section: "Getting Started" contents: - # TODO Edit folder structure after we have more docs. - docs/getting-started.qmd - docs/installation.qmd - - docs/debugging.qmd + - docs/cli.qmd - docs/inference.qmd - - docs/multipack.qmd - - docs/fsdp_qlora.qmd - - docs/input_output.qmd - - docs/rlhf.qmd - - docs/nccl.qmd - - docs/mac.qmd + + - section: "Dataset Formats" + contents: docs/dataset-formats/* + + - section: "Deployments" + contents: - docs/multi-gpu.qmd - docs/multi-node.qmd - - docs/unsloth.qmd - - docs/amd_hpc.qmd - docs/ray-integration.qmd - - section: "Dataset Formats" - contents: docs/dataset-formats/* + - docs/amd_hpc.qmd + - docs/mac.qmd + + - section: "How To Guides" + contents: + - docs/multimodal.qmd + - docs/rlhf.qmd + - docs/reward_modelling.qmd + - docs/lr_groups.qmd + - docs/lora_optims.qmd + + - section: "Core Concepts" + contents: + - docs/batch_vs_grad.qmd + - docs/dataset_preprocessing.qmd + - docs/multipack.qmd + + - section: "Advanced Features" + contents: + - docs/fsdp_qlora.qmd + - docs/unsloth.qmd + - docs/torchao.qmd + - docs/custom_integrations.qmd + + - section: "Troubleshooting" + contents: + - docs/faq.qmd + - docs/debugging.qmd + - docs/nccl.qmd + - section: "Reference" contents: - docs/config.qmd - - docs/faq.qmd format: html: - theme: materia + theme: darkly css: styles.css toc: true diff --git a/docs/amd_hpc.qmd b/docs/amd_hpc.qmd index 70fbe88ee3..c6dbe82d07 100644 --- a/docs/amd_hpc.qmd +++ b/docs/amd_hpc.qmd @@ -1,5 +1,5 @@ --- -title: Training with AMD GPUs on HPC Systems +title: AMD GPUs on HPC Systems description: A comprehensive guide for using Axolotl on distributed systems with AMD GPUs --- diff --git a/docs/cli.qmd b/docs/cli.qmd index 5b494ab5de..a57e54d9a9 100644 --- a/docs/cli.qmd +++ b/docs/cli.qmd @@ -1,28 +1,19 @@ -# Axolotl CLI Documentation +--- +title: "CLI Reference" +format: + html: + toc: true + toc-expand: 2 + toc-depth: 3 +execute: + enabled: false +--- The Axolotl CLI provides a streamlined interface for training and fine-tuning large language models. This guide covers the CLI commands, their usage, and common examples. 
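+
+Before diving into individual commands, it can help to see how they chain together. The flow below strings together commands covered on this page and in the Quickstart as a rough orientation; the config path is the example LoRA config fetched in the first step, and the adapter output directory is illustrative (check `output_dir` in your config):
+
+```bash
+# Fetch example configs, fine-tune one of them, then chat with the result
+axolotl fetch examples
+axolotl train examples/llama-3/lora-1b.yml
+axolotl inference examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"
+```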
-### Table of Contents -- Basic Commands -- Command Reference - - fetch - - preprocess - - train - - inference - - merge-lora - - merge-sharded-fsdp-weights - - evaluate - - lm-eval -- Legacy CLI Usage -- Remote Compute with Modal Cloud - - Cloud Configuration - - Running on Modal Cloud - - Cloud Configuration Options - - -### Basic Commands +## Basic Commands All Axolotl commands follow this general structure: @@ -32,9 +23,9 @@ axolotl [config.yml] [options] The config file can be local or a URL to a raw YAML file. -### Command Reference +## Command Reference -#### fetch +### fetch Downloads example configurations and deepspeed configs to your local machine. @@ -49,7 +40,7 @@ axolotl fetch deepspeed_configs axolotl fetch examples --dest path/to/folder ``` -#### preprocess +### preprocess Preprocesses and tokenizes your dataset before training. This is recommended for large datasets. @@ -74,7 +65,7 @@ dataset_prepared_path: Local folder for saving preprocessed data push_dataset_to_hub: HuggingFace repo to push preprocessed data (optional) ``` -#### train +### train Trains or fine-tunes a model using the configuration specified in your YAML file. @@ -95,7 +86,38 @@ axolotl train config.yml --no-accelerate axolotl train config.yml --resume-from-checkpoint path/to/checkpoint ``` -#### inference +It is possible to run sweeps over multiple hyperparameters by passing in a sweeps config. + +```bash +# Basic training with sweeps +axolotl train config.yml --sweep path/to/sweep.yaml +``` + +Example sweep config: +```yaml +_: + # This section is for dependent variables we need to fix + - load_in_8bit: false + load_in_4bit: false + adapter: lora + - load_in_8bit: true + load_in_4bit: false + adapter: lora + +# These are independent variables +learning_rate: [0.0003, 0.0006] +lora_r: + - 16 + - 32 +lora_alpha: + - 16 + - 32 + - 64 +``` + + + +### inference Runs inference using your trained model in either CLI or Gradio interface mode. @@ -115,7 +137,7 @@ cat prompt.txt | axolotl inference config.yml \ --base-model="./completed-model" ``` -#### merge-lora +### merge-lora Merges trained LoRA adapters into the base model. @@ -137,7 +159,7 @@ gpu_memory_limit: Limit GPU memory usage lora_on_cpu: Load LoRA weights on CPU ``` -#### merge-sharded-fsdp-weights +### merge-sharded-fsdp-weights Merges sharded FSDP model checkpoints into a single combined checkpoint. @@ -146,7 +168,7 @@ Merges sharded FSDP model checkpoints into a single combined checkpoint. axolotl merge-sharded-fsdp-weights config.yml ``` -#### evaluate +### evaluate Evaluates a model's performance using metrics specified in the config. @@ -155,27 +177,27 @@ Evaluates a model's performance using metrics specified in the config. axolotl evaluate config.yml ``` -#### lm-eval +### lm-eval Runs LM Evaluation Harness on your model. 
```bash # Basic evaluation axolotl lm-eval config.yml - -# Evaluate specific tasks -axolotl lm-eval config.yml --tasks arc_challenge,hellaswag ``` Configuration options: ```yaml -lm_eval_tasks: List of tasks to evaluate -lm_eval_batch_size: Batch size for evaluation -output_dir: Directory to save evaluation results +# List of tasks to evaluate +lm_eval_tasks: + - arc_challenge + - hellaswag +lm_eval_batch_size: # Batch size for evaluation +output_dir: # Directory to save evaluation results ``` -### Legacy CLI Usage +## Legacy CLI Usage While the new Click-based CLI is preferred, Axolotl still supports the legacy module-based CLI: @@ -195,12 +217,18 @@ accelerate launch -m axolotl.cli.inference config.yml \ --lora_model_dir="./outputs/lora-out" --gradio ``` -### Remote Compute with Modal Cloud +::: {.callout-important} +When overriding CLI parameters in the legacy CLI, use same notation as in yaml file (e.g., `--lora_model_dir`). + +**Note:** This differs from the new Click-based CLI, which uses dash notation (e.g., `--lora-model-dir`). Keep this in mind if you're referencing newer documentation or switching between CLI versions. +::: + +## Remote Compute with Modal Cloud Axolotl supports running training and inference workloads on Modal cloud infrastructure. This is configured using a cloud YAML file alongside your regular Axolotl config. -#### Cloud Configuration +### Cloud Configuration Create a cloud config YAML with your Modal settings: @@ -215,13 +243,17 @@ branch: main # Git branch to use (optional) volumes: # Persistent storage volumes - name: axolotl-cache mount: /workspace/cache + - name: axolotl-data + mount: /workspace/data + - name: axolotl-artifacts + mount: /workspace/artifacts env: # Environment variables - WANDB_API_KEY - HF_TOKEN ``` -#### Running on Modal Cloud +### Running on Modal Cloud Commands that support the --cloud flag: @@ -239,18 +271,18 @@ axolotl train config.yml --cloud cloud_config.yml --no-accelerate axolotl lm-eval config.yml --cloud cloud_config.yml ``` -#### Cloud Configuration Options +### Cloud Configuration Options ```yaml -provider: compute provider, currently only `modal` is supported -gpu: GPU type to use -gpu_count: Number of GPUs (default: 1) -memory: RAM in GB (default: 128) -timeout: Maximum runtime in seconds -timeout_preprocess: Preprocessing timeout -branch: Git branch to use -docker_tag: Custom Docker image tag -volumes: List of persistent storage volumes -env: Environment variables to pass -secrets: Secrets to inject +provider: # compute provider, currently only `modal` is supported +gpu: # GPU type to use +gpu_count: # Number of GPUs (default: 1) +memory: # RAM in GB (default: 128) +timeout: # Maximum runtime in seconds +timeout_preprocess: # Preprocessing timeout +branch: # Git branch to use +docker_tag: # Custom Docker image tag +volumes: # List of persistent storage volumes +env: # Environment variables to pass +secrets: # Secrets to inject ``` diff --git a/docs/custom_integrations.qmd b/docs/custom_integrations.qmd new file mode 100644 index 0000000000..8d04982986 --- /dev/null +++ b/docs/custom_integrations.qmd @@ -0,0 +1,57 @@ +--- +title: Custom Integrations +toc: true +toc-depth: 3 +--- + +```{python} +#| echo: false + +import re + +def process_readme(integration_name): + try: + path = f'../src/axolotl/integrations/{integration_name}/README.md' + with open(path, 'r') as f: + txt = f.read() + # Remove h1 headings + txt = re.sub(r'^# .*\n?', '', txt, flags=re.MULTILINE) + # Convert h2 to h3 + txt = re.sub(r'^## ', '### ', txt, 
flags=re.MULTILINE) + return txt + except FileNotFoundError: + return None + +def print_section(name, folder_name): + output = f"\n## {name}\n" + content = process_readme(folder_name) + if content: + output += content + output += f"\nPlease see reference [here](https://github.com/axolotl-ai-cloud/axolotl/tree/main/src/axolotl/integrations/{folder_name})\n" + return output +``` + +```{python} +#| output: asis +#| echo: false + +# Introduction text +print(""" +Axolotl adds custom features through `integrations`. They are located within the `src/axolotl/integrations` directory. + +To enable them, please check the respective documentations. +""") + +# Sections +sections = [ + ("Cut Cross Entropy", "cut_cross_entropy"), + ("Grokfast", "grokfast"), + ("Knowledge Distillation (KD)", "kd"), + ("Liger Kernels", "liger"), + ("Language Model Evaluation Harness (LM Eval)", "lm_eval"), + ("Spectrum", "spectrum") +] + +for section_name, folder_name in sections: + print(print_section(section_name, folder_name)) +``` diff --git a/docs/dataset-formats/conversation.qmd b/docs/dataset-formats/conversation.qmd index 6866f3f408..d67e35876b 100644 --- a/docs/dataset-formats/conversation.qmd +++ b/docs/dataset-formats/conversation.qmd @@ -6,7 +6,9 @@ order: 3 ## sharegpt -IMPORTANT: ShareGPT is deprecated!. Please see [chat_template](#chat_template) section below. +::: {.callout-important} +ShareGPT is deprecated!. Please see [chat_template](#chat_template) section below. +::: ## pygmalion diff --git a/docs/dataset-formats/index.qmd b/docs/dataset-formats/index.qmd index a46f466048..4275858f62 100644 --- a/docs/dataset-formats/index.qmd +++ b/docs/dataset-formats/index.qmd @@ -13,7 +13,7 @@ As there are a lot of available options in Axolotl, this guide aims to provide a Axolotl supports 3 kinds of training methods: pre-training, supervised fine-tuning, and preference-based post-training (e.g. DPO, ORPO, PRMs). Each method has their own dataset format which are described below. -## [Pre-training](pretraining.qmd) +## Pre-training When aiming to train on large corpora of text datasets, pre-training is your go-to choice. Due to the size of these datasets, downloading the entire-datasets before beginning training would be prohibitively time-consuming. Axolotl supports [streaming](https://huggingface.co/docs/datasets/en/stream) to only load batches into memory at a time. @@ -96,6 +96,10 @@ One step is equal to `sequence_len * micro_batch_size * gradient_accumulation_st It is recommended to leave this off if downloading from Hugging Face hub as it would download the entire dataset which can be very large. +### Reference + +Please see docs [here](pretraining.qmd). + ## Supervised fine-tuning (SFT) Supervised fine-tuning is the process of training models to respond to an instruction or chat input. @@ -120,7 +124,7 @@ If you went through the flow chart and did not find one that matches, it is reco You can mix and match within each approach or across approaches to train a model on a variety of datasets. ::: -### [Pre-Tokenized Dataset](tokenized.qmd) +### Pre-Tokenized Dataset We suggest this approach when you want to bring your own tokenized dataset. @@ -145,7 +149,9 @@ datasets: `type: ` is empty! ::: -### [Template Free Dataset](template_free.qmd) +Reference: [Pre-Tokenized Dataset Documentation](tokenized.qmd). + +### Template Free Dataset We reccomend this approach when you want granular control over the prompt formatting, special tokens, and masking, whilst letting Axolotl handle the tokenization. 
This is very useful if your dataset has unique prompts that differ across samples and where one single general template wouldn't suffice. @@ -182,7 +188,9 @@ datasets: type: input_output ``` -### [Conversation Dataset](conversation.qmd) +Reference: [Template Free Documentation](template_free.qmd). + +### Conversation Dataset `conversation` messages are a list of messages which usually contain a `role` and `content` key. @@ -258,7 +266,7 @@ Newer conversation datasets usually follow the OpenAI format. Axolotl supports both as well as allowing customization of any kind of key. -#### [Chat Template Usage](conversation.qmd#chat_template) +#### Chat Template Usage To properly use this method, it is important to identify three things: @@ -340,9 +348,19 @@ datasets: narrator: ["narrator"] ``` -#### Applying `chat_template` +::: {.callout-tip} +As chat_templates may use hardcoded EOS/EOT tokens that are different from the tokenizer's EOS, it is highly recommended to set them. For example, `ChatML` uses `<|im_end|>` to end turns. + +```yaml +special_tokens: + eos_token: <|im_end|> +``` + +::: -Once all the above steps are completed, you could combine all these configs together to form a bespoke configuration for your custom dataset. The final step would be to correctly set the EOS token in your config: +##### Applying `chat_template` + +Once all the above steps are completed, you could combine all these configs together to form a bespoke configuration for your custom dataset. ```yaml datasets: @@ -391,7 +409,17 @@ If this config were to be applied to the sample dataset above, the output would The first number refers to the label, the second refers to the `token_id`. For example, `-100` labels appear on non-assistant portions, meaning that they are masked during. For assistant portions, the label is the same as the `token_id`. -### [Instruction Dataset](inst_tune.qmd) +::: {.callout-note} + +If during `preprocess`, there are a lot of warnings of `Could not find content __ boundary`, please check the FAQ section for [chat_templates](../faq.qmd#chat-templates). + +::: + +#### Reference + +Please see docs [here](conversation.qmd). + +### Instruction Dataset Instruction datasets are used to train instruction-following models and comprise a prompt, containing an instruction, and a single response. In contrast to chat datasets which may be multi-turn, instruct datasets are typically single-turn. @@ -423,6 +451,9 @@ datasets: Axolotl supports many kinds of instruction dataset. All of them can be found here (https://axolotl-ai-cloud.github.io/axolotl/docs/dataset-formats/inst_tune.html) with their respective type and sample row format. + +Reference: [Instruction Dataset Documentation](inst_tune.qmd). + #### Custom Instruct Prompt Format Due to the myriad possibilities of instruction formats, Axolotl allows customizing your own instruction format without having to dive into the code directly. @@ -453,6 +484,8 @@ datasets: The config sets that the `field_instruction` is actually named `input`, and the `field_input` is empty as we don't have an `input` in this sample. Generally, `instruction` can be thought as the question to the model, and `input` as the additional information with `output` being the response. It is not necessary to have an `input` nor `system`. In the end, the most important part is to understand what format you want it to look like and how you can customize this to your use case. +Reference: [Custom Instruct Prompt Format Documentation](inst_tune.qmd#how-to-add-custom-prompt-format). 
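+
+To make the field mapping described in this section concrete, a row of such a dataset could look like the following (the row is illustrative, not taken from a real dataset): the value under `input` is treated as the instruction and `output` as the response.
+
+```json
+{"input": "What is the capital of France?", "output": "The capital of France is Paris."}
+```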
+ ## Reinforcement Learning from Human Feedback (RLHF) -As there are multiple RLHF methods with their own dataset requirements. Please see [RLHF datasets](../rlhf.qmd) documentation for more detail. +As there are multiple RLHF methods with their own dataset requirements. Please see [RLHF documentation](../rlhf.qmd) for more detail. diff --git a/docs/dataset-formats/pretraining.qmd b/docs/dataset-formats/pretraining.qmd index 600fb63e09..b51b0e0b38 100644 --- a/docs/dataset-formats/pretraining.qmd +++ b/docs/dataset-formats/pretraining.qmd @@ -27,7 +27,6 @@ pretraining_dataset: type: pretrain trust_remote_code: skip: # number of rows of data to skip over from the beginning -... ``` ::: diff --git a/docs/dataset-formats/template_free.qmd b/docs/dataset-formats/template_free.qmd index 5087d6a013..c75c5931e8 100644 --- a/docs/dataset-formats/template_free.qmd +++ b/docs/dataset-formats/template_free.qmd @@ -1,7 +1,239 @@ --- title: Template-Free description: Construct prompts without a template. +toc: true +toc-depth: 3 order: 4 --- -See [these docs](../input_output.qmd). +## Background {#sec-background} + +### Masking Inputs {#masking-inputs} + +One of the most popular features of +[axolotl](https://github.com/axolotl-ai-cloud/axolotl) is +setting the following configuration value: + + +```yaml +train_on_inputs: false +``` + +If you declare a [dataset formats](https://github.com/axolotl-ai-cloud/axolotl?tab=readme-ov-file#dataset) +such as `alpaca` or `chatml`, axolotl knows what is an input +(i.e. human) vs. an output (i.e. the assistant) and masks the input +labels so that your model can focus on predicting the outputs only. + +### You may not want prompt templates {#sec-you-may-not-want-prompt-templates} + +However, there are many situations where you don't want to use one of +these formats or templates. This is because they can: + +- Add unnecessary boilerplate to your prompts. +- Create artifacts like special delimiters `<|im_start|>` that can + quickly become footguns if you don't include them correctly at + inference time. +- Enforce a *chat* interface when you do not want one. Sometimes you + just want to fine-tune a model to a very specific task and do NOT + want multi-turn conversations, roles, etc. +- Limit you to only certain roles that the template allows. + +### The `input_output` format {#sec-the-inputoutput-format} + +You can construct your prompts without a template by using the +`input_output` format, by setting `type: input_output` in your +configuration file like this: + +**config.yml** + +```yaml +train_on_inputs: false # Mask segments of your data +datasets: + - path: output.jsonl + type: input_output # use template free prompt construction +``` + +Unlike `type: completion`, which is also template-free, +`type: input_output` allows you to mask segments of your text. More +details on how this works are described below. + +## Usage {#sec-usage} + +This is how you can use the `input_output` format: + +### 1. Prepare Data {#sec-1-prepare-data} + +To use the `input_output` format, collect your data in the following +format into a jsonl file (below is the first row from the file +`output`.jsonl` pretty printed): + +```bash +$ head -n1 output.jsonl | python -m json.tool +``` + +:::{.cell-output .cell-output-stdout} + { + "segments": [ + { + "label": true, + "text": "Hello\n" + }, + { + "label": true, + "text": "hi there!. 
" + }, + { + "label": false, + "text": "goodbye " + }, + { + "label": true, + "text": "farewell" + } + ] + } +::: + +Set `label:false` when you want to mask a segment of text so that the +model isn't trained on it. Some things to keep in mind: + +> [!IMPORTANT] +> 1. **EOS, BOS, spaces, newlines etc. are entirely up to you. Axolotl + concatenates all the segments as-is.** The tokenizer doesn't add + anything additional. Notice how I added spaces, newlines, `` + (BOS), and `` (EOS) myself. +> 2. Make sure you check the materialized output to validate that the + prompt is getting assembled how you like. + +### 2. Use `type: input_output` {#sec-2-use-type-inputoutput} + +Let's materialize data with our `output.jsonl` file by setting +`type: input_output` in our axolotl config: + +```yaml +# training_config.yaml +base_model: mistralai/Mistral-7B-v0.1 +data_seed: 49 +seed: 49 + +datasets: + - path: output.jsonl + type: input_output +val_set_size: 0.1 + +sequence_len: 896 +sample_packing: false + +micro_batch_size: 2 +gradient_accumulation_steps: 3 +eval_batch_size: 2 +num_epochs: 1 +learning_rate: 0.0002 + +train_on_inputs: false +special_tokens: + bos_token: "" + eos_token: "" + unk_token: "" +``` + +You can use the following command to materialize your data. The +`--debug` flag will print the tokens, along with the labels so you can +verify that the correct items are being ignored: + +```bash +axolotl preprocess training_config.yaml --debug + +... +[2024-03-05 23:36:46,969] [INFO] [axolotl.check_example_labels:35] [PID:607731] [RANK:0] (1, 1) Hello(22557, 22557) +(13, 13) hi(12014, 12014) there(736, 736) !(28808, 28808) .(28723, 28723) (28705, 28705) good(-100, 1179) bye(-100, 17664) (-100, 28705) fare(19111, 19111) well(5458, 5458) (2, 2) + +``` + +The format is `decoded_token`(`label`, `token_id`), for example, +`(1, 1)` means that the token is ``, the label is `1` and the +token_id is `1`. When the label is `-100` then that token is ignored for +training. + +### 3. Check the prompts {#sec-3-check-the-prompts} + +Here is another way to check the materialized output: + +```python +from transformers import AutoTokenizer +from datasets import load_from_disk +import yaml + +directory = !ls last_run_prepared/ +with open('training_config.yaml', 'r') as f: + cfg = yaml.safe_load(f) +model_id = cfg['base_model'] +tok = AutoTokenizer.from_pretrained(model_id) +ds = load_from_disk(f'last_run_prepared/{directory[0]}/') +``` + +```python +>>> row = ds[0] +>>> print(tok.decode(row['input_ids'])) + Hello + hi there!. goodbye farewell +``` + +We can check that the right tokens are ignored by comparing the labels +to each token: + +```python +import pandas as pd +pd.DataFrame([{'token': tok.decode(i), 'label': l, 'id':i} for i,l in + zip(row['input_ids'], row['labels'])]) +``` + +| token | label | id | +|-------|-------|-------| +| 0 | \ | 1 | +| 1 | Hello | 22557 | +| 2 | \\n | 13 | +| 3 | hi | 12014 | +| 4 | there | 736 | +| 5 | ! | 28808 | +| 6 | . | 28723 | +| 7 | | 28705 | +| 8 | good | -100 | +| 9 | bye | -100 | +| 10 | | -100 | +| 11 | fare | 19111 | +| 12 | well | 5458 | +| 13 | \| 2 | + + + +If we look at the input data, the above table seems correct! (The jsonl +version is repeated below for reference): + + +```bash +$ head -n1 output.jsonl | python -m json.tool +``` + +:::{.cell-output .cell-output-stdout} + { + "segments": [ + { + "label": true, + "text": "Hello\n" + }, + { + "label": true, + "text": "hi there!. 
" + }, + { + "label": false, + "text": "goodbye " + }, + { + "label": true, + "text": "farewell" + } + ] + } +::: diff --git a/docs/debugging.qmd b/docs/debugging.qmd index 4eaa609272..bf3c6fe7e8 100644 --- a/docs/debugging.qmd +++ b/docs/debugging.qmd @@ -31,11 +31,13 @@ While debugging it's helpful to simplify your test scenario as much as possible. - Set `CUDA_VISIBLE_DEVICES` to a single GPU, ex: `export CUDA_VISIBLE_DEVICES=0`. - Set `dataset_processes: 1` in your axolotl config or run the training command with `--dataset_processes=1`. 2. **Use a small dataset**: Construct or use a small dataset from HF Hub. When using a small dataset, you will often have to make sure `sample_packing: False` and `eval_sample_packing: False` to avoid errors. If you are in a pinch and don't have time to construct a small dataset but want to use from the HF Hub, you can shard the data (this will still tokenize the entire dataset, but will only use a fraction of the data for training. For example, to shard the dataset into 20 pieces, add the following to your axolotl config): + ```yaml - dataset: + datasets: ... shards: 20 ``` + 3. **Use a small model**: A good example of a small model is [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0). 4. **Minimize iteration time**: Make sure the training loop finishes as fast as possible, with these settings. - `micro_batch_size: 1` @@ -85,7 +87,7 @@ The easiest way to get started is to modify the [.vscode/launch.json](../.vscode For example, to mimic the command `cd devtools && CUDA_VISIBLE_DEVICES=0 accelerate launch -m axolotl.cli.train dev_chat_template.yml`, you would use the below configuration[^1]. Note that we add additional flags that override the axolotl config and incorporate the tips above (see the comments). We also set the working directory to `devtools` and set the `env` variable `HF_HOME` to a temporary folder that is later partially deleted. This is because we want to delete the HF dataset cache before each run in order to ensure that the data preprocessing code is run from scratch. -```jsonc +```json // .vscode/launch.json { "version": "0.2.0", @@ -132,7 +134,7 @@ For example, to mimic the command `cd devtools && CUDA_VISIBLE_DEVICES=0 acceler Below is the [./vscode/tasks.json](../.vscode/tasks.json) file that defines the `cleanup-for-dataprep` task. This task is run before each debugging session when you use the above configuration. Note how there are two tasks that delete the two folders mentioned above. The third task `cleanup-for-dataprep` is a composite task that combines the two tasks. A composite task is necessary because VSCode does not allow you to specify multiple tasks in the `preLaunchTask` argument of the `launch.json` file. -```jsonc +```json // .vscode/tasks.json // this file is used by launch.json { diff --git a/docs/faq.qmd b/docs/faq.qmd index 3f78bde73c..0a181e022a 100644 --- a/docs/faq.qmd +++ b/docs/faq.qmd @@ -3,6 +3,7 @@ title: FAQ description: Frequently asked questions --- +### General **Q: The trainer stopped and hasn't progressed in several minutes.** @@ -24,6 +25,24 @@ description: Frequently asked questions > A: This is usually an issue with the GPU. This can be resolved through setting the os environment variable `CUDA_VISIBLE_DEVICES=0`. If you are on runpod, this is usually a pod issue. Starting a new pod should take care of it. 
+### Chat templates + **Q: `jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'content' / 'role' / ____`** > A: This means that the property mapping for the stated attribute does not exist when building `chat_template` prompt. For example, if `no attribute 'content'`, please check you have added the correct mapping for `content` under `message_property_mappings`. + +**Q: `Empty template generated for turn ___`** + +> A: The `content` is empty for that turn. + +**Q: `Could not find content start/end boundary for turn __`** + +> A: The specific turn's start/end could not be detected. Please ensure you have set the `eos_token` following your `chat_template`. Otherwise, this could be a `chat_template` which doesn't use proper boundaries for each turn (like system). On the rare occurrence, make sure your content is not `[[dummy_message]]`. Please let us know about this. + +**Q: `Content end boundary is before start boundary for turn ___`** + +> A: This is an edge case which should not occur. Please create an Issue if this happens. + +**Q: `Content end boundary is the same as start boundary for turn ___. This is likely an empty turn.`** + +> A: This is likely an empty turn. diff --git a/docs/getting-started.qmd b/docs/getting-started.qmd index 2292cde151..8e826b9592 100644 --- a/docs/getting-started.qmd +++ b/docs/getting-started.qmd @@ -1,5 +1,5 @@ --- -title: "Getting Started with Axolotl" +title: "Quickstart" format: html: toc: true @@ -17,12 +17,12 @@ Let's start by fine-tuning a small language model using LoRA. This example uses Assuming `axolotl` is installed (if not, see our [Installation Guide](installation.qmd)) 1. Download example configs: -```shell +```bash axolotl fetch examples ``` 2. Run the training: -```shell +```bash axolotl train examples/llama-3/lora-1b.yml ``` @@ -108,7 +108,7 @@ Please consult the supported [Dataset Formats](dataset-formats/) for more detail 3. Run the training: -```shell +```bash axolotl train my_training.yml ``` @@ -118,7 +118,7 @@ axolotl train my_training.yml After training, test your model: -```shell +```bash axolotl inference my_training.yml --lora-model-dir="./outputs/lora-out" ``` @@ -126,7 +126,7 @@ axolotl inference my_training.yml --lora-model-dir="./outputs/lora-out" For large datasets, preprocess first: -```shell +```bash axolotl preprocess my_training.yml ``` @@ -134,7 +134,7 @@ axolotl preprocess my_training.yml Launch a Gradio interface: -```shell +```bash axolotl inference my_training.yml --lora-model-dir="./outputs/lora-out" --gradio ``` diff --git a/docs/inference.qmd b/docs/inference.qmd index 59e352c181..aded400d04 100644 --- a/docs/inference.qmd +++ b/docs/inference.qmd @@ -1,11 +1,10 @@ --- -title: "Inference Guide" +title: "Inference" format: html: toc: true toc-depth: 3 number-sections: true - code-tools: true execute: enabled: false --- diff --git a/docs/input_output.qmd b/docs/input_output.qmd index 6559578d18..f9d2df2336 100644 --- a/docs/input_output.qmd +++ b/docs/input_output.qmd @@ -3,263 +3,4 @@ title: Template-free prompt construction description: "Template-free prompt construction with the `input_output` format" --- - - -- [Background](#background) - - [Masking Inputs](#masking-inputs) - - [You may not want prompt templates](#you-may-not-want-prompt-templates) - - [The `input_output` format](#the-input_output-format) -- [Usage](#usage) - - [1. Prepare Data](#1-prepare-data) - - [2. Use `type: input_output`](#2-use-type-input_output) - - [3. 
Check the prompts](#3-check-the-prompts) - - - - - -## Background - - - -### Masking Inputs - -One of the most popular features of -[axolotl](https://github.com/axolotl-ai-cloud/axolotl) is -setting the following configuration value: - - -```yaml -train_on_inputs: false -``` - -If you declare a [dataset formats](https://github.com/axolotl-ai-cloud/axolotl?tab=readme-ov-file#dataset) -such as `alpaca` or `chatml`, axolotl knows what is an input -(i.e. human) vs. an output (i.e. the assistant) and masks the input -labels so that your model can focus on predicting the outputs only. - - - -### You may not want prompt templates - -However, there are many situations where you don't want to use one of -these formats or templates. This is because they can: - -- Add unnecessary boilerplate to your prompts. -- Create artifacts like special delimiters `<|im_start|>` that can - quickly become footguns if you don't include them correctly at - inference time. -- Enforce a *chat* interface when you do not want one. Sometimes you - just want to fine-tune a model to a very specific task and do NOT - want multi-turn conversations, roles, etc. -- Limit you to only certain roles that the template allows. - - - -### The `input_output` format - -You can construct your prompts without a template by using the -`input_output` format, by setting `type: input_output` in your -configuration file like this: - -**config.yml** - -```yaml -train_on_inputs: false # Mask segments of your data -datasets: - - path: output.jsonl - type: input_output # use template free prompt construction -``` - -Unlike `type: completion`, which is also template-free, -`type: input_output` allows you to mask segments of your text. More -details on how this works are described below. - - - -## Usage - -This is how you can use the `input_output` format: - - - -### 1. Prepare Data - -To use the `input_output` format, collect your data in the following -format into a jsonl file (below is the first row from the file -`output`.jsonl` pretty printed): - -```bash -$ head -n1 output.jsonl | python -m json.tool -``` - -:::{.cell-output .cell-output-stdout} - { - "segments": [ - { - "label": true, - "text": "Hello\n" - }, - { - "label": true, - "text": "hi there!. " - }, - { - "label": false, - "text": "goodbye " - }, - { - "label": true, - "text": "farewell" - } - ] - } -::: - -Set `label:false` when you want to mask a segment of text so that the -model isn't trained on it. Some things to keep in mind: - -> [!IMPORTANT] -> 1. **EOS, BOS, spaces, newlines etc. are entirely up to you. Axolotl - concatenates all the segments as-is.** The tokenizer doesn't add - anything additional. Notice how I added spaces, newlines, `` - (BOS), and `` (EOS) myself. -> 2. Make sure you check the materialized output to validate that the - prompt is getting assembled how you like. - - - -### 2. Use `type: input_output` - -Let's materialize data with our `output.jsonl` file by setting -`type: input_output` in our axolotl config: - -```yaml -# training_config.yaml -base_model: mistralai/Mistral-7B-v0.1 -data_seed: 49 -seed: 49 - -datasets: - - path: output.jsonl - type: input_output -val_set_size: 0.1 - -sequence_len: 896 -sample_packing: false - -micro_batch_size: 2 -gradient_accumulation_steps: 3 -eval_batch_size: 2 -num_epochs: 1 -learning_rate: 0.0002 - -train_on_inputs: false -special_tokens: - bos_token: "" - eos_token: "" - unk_token: "" -``` - -You can use the following command to materialize your data. 
The -`--debug` flag will print the tokens, along with the labels so you can -verify that the correct items are being ignored: - -```bash -$ python -m axolotl.cli.preprocess training_config.yaml --debug - -... -[2024-03-05 23:36:46,969] [INFO] [axolotl.check_example_labels:35] [PID:607731] [RANK:0] (1, 1) Hello(22557, 22557) -(13, 13) hi(12014, 12014) there(736, 736) !(28808, 28808) .(28723, 28723) (28705, 28705) good(-100, 1179) bye(-100, 17664) (-100, 28705) fare(19111, 19111) well(5458, 5458) (2, 2) - -``` - -The format is `decoded_token`(`label`, `token_id`), for example, -`(1, 1)` means that the token is ``, the label is `1` and the -token_id is `1`. When the label is `-100` then that token is ignored for -training. - - - -### 3. Check the prompts - -Here is another way to check the materialized output: - -```python -from transformers import AutoTokenizer -from datasets import load_from_disk -import yaml - -directory = !ls last_run_prepared/ -with open('training_config.yaml', 'r') as f: - cfg = yaml.safe_load(f) -model_id = cfg['base_model'] -tok = AutoTokenizer.from_pretrained(model_id) -ds = load_from_disk(f'last_run_prepared/{directory[0]}/') -``` - -```python ->>> row = ds[0] ->>> print(tok.decode(row['input_ids'])) - Hello - hi there!. goodbye farewell -``` - -We can check that the right tokens are ignored by comparing the labels -to each token: - -```python -import pandas as pd -pd.DataFrame([{'token': tok.decode(i), 'label': l, 'id':i} for i,l in - zip(row['input_ids'], row['labels'])]) -``` - -| token | label | id | -|-------|-------|-------| -| 0 | \ | 1 | -| 1 | Hello | 22557 | -| 2 | \\n | 13 | -| 3 | hi | 12014 | -| 4 | there | 736 | -| 5 | ! | 28808 | -| 6 | . | 28723 | -| 7 | | 28705 | -| 8 | good | -100 | -| 9 | bye | -100 | -| 10 | | -100 | -| 11 | fare | 19111 | -| 12 | well | 5458 | -| 13 | \| 2 | - - - -If we look at the input data, the above table seems correct! (The jsonl -version is repeated below for reference): - - -```bash -$ head -n1 output.jsonl | python -m json.tool -``` - -:::{.cell-output .cell-output-stdout} - { - "segments": [ - { - "label": true, - "text": "Hello\n" - }, - { - "label": true, - "text": "hi there!. " - }, - { - "label": false, - "text": "goodbye " - }, - { - "label": true, - "text": "farewell" - } - ] - } -::: +The documentation moved to [here](dataset-formats/template_free.qmd). diff --git a/docs/installation.qmd b/docs/installation.qmd index f16e814ccc..2be74be0f4 100644 --- a/docs/installation.qmd +++ b/docs/installation.qmd @@ -1,11 +1,10 @@ --- -title: "Installation Guide" +title: "Installation" format: html: toc: true toc-depth: 3 number-sections: true - code-tools: true execute: enabled: false --- diff --git a/docs/lora_optims.qmd b/docs/lora_optims.qmd index 3f8276bc5b..8bee20402e 100644 --- a/docs/lora_optims.qmd +++ b/docs/lora_optims.qmd @@ -1,7 +1,6 @@ --- title: "LoRA Optimizations" -description: "Custom autograd functions and Triton kernels in Axolotl for optimized -LoRA fine-tuning" +description: "Custom autograd functions and Triton kernels in Axolotl for optimized LoRA fine-tuning" --- Inspired by [Unsloth](https://github.com/unslothai/unsloth), we've implemented two @@ -12,6 +11,7 @@ to leverage operator fusion and tensor re-use in order to improve speed and redu memory usage during the forward and backward passes of these calculations. 
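+
+As a rough sketch of how these optimizations are switched on in a training config (treat the flag names below as a sketch and confirm them against the full option reference in [config.qmd](config.qmd)):
+
+```yaml
+# Enable the custom autograd / Triton kernel paths for LoRA layers
+lora_mlp_kernel: true
+lora_qkv_kernel: true
+lora_o_kernel: true
+```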
We currently support several common model architectures, including (but not limited to): + - `llama` - `mistral` - `qwen2` diff --git a/docs/mac.qmd b/docs/mac.qmd index 2a83035381..2e6c5c429e 100644 --- a/docs/mac.qmd +++ b/docs/mac.qmd @@ -19,4 +19,5 @@ Current support: - [ ] DeepSpeed Untested: + - FSDP diff --git a/docs/multi-gpu.qmd b/docs/multi-gpu.qmd index fe293b750b..19293bb5b7 100644 --- a/docs/multi-gpu.qmd +++ b/docs/multi-gpu.qmd @@ -1,5 +1,5 @@ --- -title: "Multi-GPU Training Guide" +title: "Multi-GPU" format: html: toc: true @@ -35,7 +35,11 @@ deepspeed: deepspeed_configs/zero1.json ### Usage {#sec-deepspeed-usage} ```{.bash} -accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json +# Passing arg via config +axolotl train config.yml + +# Passing arg via cli +axolotl train config.yml --deepspeed deepspeed_configs/zero1.json ``` ### ZeRO Stages {#sec-zero-stages} @@ -70,25 +74,7 @@ For combining FSDP with QLoRA, see our [dedicated guide](fsdp_qlora.qmd). ### Liger Kernel Integration {#sec-liger} -::: {.callout-note} -Liger Kernel provides efficient Triton kernels for LLM training, offering: - -- 20% increase in multi-GPU training throughput -- 60% reduction in memory usage -- Compatibility with both FSDP and DeepSpeed -::: - -Configuration: - -```{.yaml} -plugins: - - axolotl.integrations.liger.LigerPlugin -liger_rope: true -liger_rms_norm: true -liger_glu_activation: true -liger_layer_norm: true -liger_fused_linear_cross_entropy: true -``` +Please see [docs](custom_integrations.qmd#liger) for more info. ## Troubleshooting {#sec-troubleshooting} diff --git a/docs/multi-node.qmd b/docs/multi-node.qmd index aa6704ab9e..cec8ff45df 100644 --- a/docs/multi-node.qmd +++ b/docs/multi-node.qmd @@ -13,7 +13,7 @@ You will also need to have the same configuration file for your model on each ma Make sure the main machine is reachable by other machines. ::: -# Accelerate +## Accelerate You will need to create a configuration for accelerate, either by using `accelerate config` and follow the instructions or you can use one of the preset below: @@ -51,17 +51,17 @@ fsdp_config: All you have to do now is launch using accelerate as you would usually do on each machine and voila, the processes will start once you have launched accelerate on every machine. -# Raytrain +## Raytrain Please see ray train doc [here](ray-integration.qmd). -# Torchrun +## Torchrun If you are using Infiniband, we recommend torchrun to utilize the full bandwidth. Set the following env (change buffersize/socketname depending on your system): -```yaml +```bash export NCCL_IB_DISABLE=0 export NCCL_SOCKET_IFNAME="eth0,en,eth,em,bond" export NCCL_BUFFSIZE=2097152 diff --git a/docs/nccl.qmd b/docs/nccl.qmd index 3b616aa665..bd2a744124 100644 --- a/docs/nccl.qmd +++ b/docs/nccl.qmd @@ -13,13 +13,13 @@ Often, this timeout will happen after 30 minutes (the default setting) and is ac Forcing cross-GPU communication via [NVLink](https://en.wikipedia.org/wiki/NVLink) may help without increasing timeouts. 
To verify that your configuration is leveraging NVLink run the following command: -```shell +```bash nvidia-smi nvlink --status ``` To force NCCL to use NVLink, simply set this in the environment: -```shell +```bash export NCCL_P2P_LEVEL=NVL ``` @@ -33,13 +33,13 @@ If NVLink is not available in your environment there are other options for ``NCC To validate that acceptable data transfer speeds exist for your training job, running [NCCL Tests](https://github.com/NVIDIA/nccl-tests/blob/master/README.md) can help pinpoint bottlenecks, for example: -```shell +```bash ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 3 ``` It can be useful when debugging NCCL communication timeouts to activate additional logging in both PyTorch and NCCL: -```shell +```bash export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=ALL export TORCH_DISTRIBUTED_DEBUG=INFO diff --git a/docs/ray-integration.qmd b/docs/ray-integration.qmd index 0a2b45ef5b..edf9e2dafd 100644 --- a/docs/ray-integration.qmd +++ b/docs/ray-integration.qmd @@ -1,5 +1,5 @@ --- -title: Ray Train integration +title: Ray Train description: How to use Axolotl with Ray Train --- @@ -9,7 +9,7 @@ With the `--use-ray` CLI flag, Axolotl will use Ray Train's [`TorchTrainer`](htt ## Ray cluster setup -A prerequisite using the Ray Train integration is to setup a Ray cluster on your desired node(s). For a detailed guide on how you can get started with ray clusters, check the official Ray docs here: https://docs.ray.io/en/latest/cluster/getting-started.html +A prerequisite using the Ray Train integration is to setup a Ray cluster on your desired node(s). For a detailed guide on how you can get started with ray clusters, check the official Ray docs [here](https://docs.ray.io/en/latest/cluster/getting-started.html). Every Ray cluster has one _head_ node and a set of worker nodes. The head node is just like any other worker node, but it also runs certain special processes related to scheduling and orchestration. Ray-enabled scripts are run on the head node and depending on the resources (number of CPUs, GPUs, etc) they request, will be scheduled to run certain tasks on the worker nodes. For more on key concepts behind a Ray cluster, you can refer this [doc](https://docs.ray.io/en/latest/cluster/key-concepts.html#cluster-key-concepts). @@ -58,13 +58,11 @@ You can find an example configuration at `configs/llama-3/lora-1b-ray.yaml`. The key parameters to note here are: ```yaml -... use_ray: true ray_num_workers: 4 # optional resources_per_worker: GPU: 1 -... ``` - `use_ray`: This is the flag that enables the Ray Train integration. You can either use the corresponding `--use-ray` flag in the CLI or set `use_ray` in the config file. diff --git a/docs/rlhf.qmd b/docs/rlhf.qmd index ff150ffbe1..741976bc68 100644 --- a/docs/rlhf.qmd +++ b/docs/rlhf.qmd @@ -3,22 +3,22 @@ title: "RLHF (Beta)" description: "Reinforcement Learning from Human Feedback is a method whereby a language model is optimized from data using human feedback." back-to-top-navigation: true toc: true -toc-depth: 3 +toc-depth: 4 --- -# Overview +## Overview Reinforcement Learning from Human Feedback is a method whereby a language model is optimized from data using human feedback. 
Various methods include, but not limited to: -- Proximal Policy Optimization (PPO) (not yet supported in axolotl) - [Direct Preference Optimization (DPO)](#dpo) - [Identity Preference Optimization (IPO)](#ipo) - [Kahneman-Tversky Optimization (KTO)](#kto) - [Odds Ratio Preference Optimization (ORPO)](#orpo) +- Proximal Policy Optimization (PPO) (not yet supported in axolotl) -# RLHF using Axolotl +## RLHF using Axolotl ::: {.callout-important} This is a BETA feature and many features are not fully implemented. You are encouraged to open new PRs to improve the integration and functionality. @@ -30,7 +30,7 @@ We rely on the [TRL](https://github.com/huggingface/trl) library for implementat You can find what each method supports by going into `src/axolotl/prompt_strategies/{method}` where `{method}` is one of our supported methods. The `type: ` can be retrieved from `{method}.{function_name}`. ::: -## DPO +### DPO Example config: @@ -47,7 +47,7 @@ datasets: DPO supports the following types with the following dataset format: -### chatml.argilla +#### chatml.argilla ```json { @@ -58,7 +58,7 @@ DPO supports the following types with the following dataset format: } ``` -### chatml.argilla_chat +#### chatml.argilla_chat ```json { @@ -73,7 +73,7 @@ DPO supports the following types with the following dataset format: } ``` -### chatml.icr +#### chatml.icr ```json { @@ -84,7 +84,7 @@ DPO supports the following types with the following dataset format: } ``` -### chatml.intel +#### chatml.intel ```json { @@ -95,7 +95,7 @@ DPO supports the following types with the following dataset format: } ``` -### chatml.prompt_pairs +#### chatml.prompt_pairs ```json { @@ -106,7 +106,7 @@ DPO supports the following types with the following dataset format: } ``` -### chatml.ultra +#### chatml.ultra ```json { @@ -123,7 +123,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.argilla +#### llama3.argilla ```json { @@ -134,7 +134,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.argilla_chat +#### llama3.argilla_chat ```json { @@ -149,7 +149,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.icr +#### llama3.icr ```json { @@ -160,7 +160,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.intel +#### llama3.intel ```json { @@ -171,7 +171,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.prompt_pairs +#### llama3.prompt_pairs ```json { @@ -182,7 +182,7 @@ DPO supports the following types with the following dataset format: } ``` -### llama3.ultra +#### llama3.ultra ```json { @@ -199,7 +199,7 @@ DPO supports the following types with the following dataset format: } ``` -### zephyr.nectar +#### zephyr.nectar ```json { @@ -218,7 +218,7 @@ DPO supports the following types with the following dataset format: } ``` -### chat_template.default +#### chat_template.default ```yaml rl: dpo @@ -264,7 +264,7 @@ Sample input format: } ``` -### user_defined.default +#### user_defined.default For custom behaviors, @@ -295,7 +295,7 @@ The input format is a simple JSON input with customizable fields based on the ab } ``` -## IPO +### IPO As IPO is just DPO with a different loss function, all supported options for DPO works here. 
@@ -303,7 +303,7 @@ As IPO is just DPO with a different loss function, all supported options for DPO rl: ipo ``` -## ORPO +### ORPO Paper: https://arxiv.org/abs/2403.07691 @@ -320,7 +320,7 @@ datasets: ORPO supports the following types with the following dataset format: -### chat_template.argilla +#### chat_template.argilla ```json { @@ -339,7 +339,7 @@ ORPO supports the following types with the following dataset format: } ``` -## KTO +### KTO ```yaml rl: kto @@ -360,7 +360,7 @@ gradient_checkpointing_kwargs: KTO supports the following types with the following dataset format: -### chatml.argilla +#### chatml.argilla ```json { @@ -370,7 +370,7 @@ KTO supports the following types with the following dataset format: } ``` -### chatml.argilla_chat +#### chatml.argilla_chat ```json { @@ -383,7 +383,7 @@ KTO supports the following types with the following dataset format: } ``` -### chatml.intel +#### chatml.intel ```json { @@ -393,7 +393,7 @@ KTO supports the following types with the following dataset format: } ``` -### chatml.prompt_pairs +#### chatml.prompt_pairs ```json { @@ -403,7 +403,7 @@ KTO supports the following types with the following dataset format: } ``` -### chatml.ultra +#### chatml.ultra ```json { @@ -413,7 +413,7 @@ KTO supports the following types with the following dataset format: } ``` -### llama3.argilla +#### llama3.argilla ```json { @@ -423,7 +423,7 @@ KTO supports the following types with the following dataset format: } ``` -### llama3.argilla_chat +#### llama3.argilla_chat ```json { @@ -434,7 +434,7 @@ KTO supports the following types with the following dataset format: } ``` -### llama3.intel +#### llama3.intel ```json { @@ -444,7 +444,7 @@ KTO supports the following types with the following dataset format: } ``` -### llama3.prompt_pairs +#### llama3.prompt_pairs ```json { @@ -454,7 +454,7 @@ KTO supports the following types with the following dataset format: } ``` -### llama3.ultra +#### llama3.ultra ```json { @@ -464,7 +464,7 @@ KTO supports the following types with the following dataset format: } ``` -### user_defined.default +#### user_defined.default For custom behaviors, @@ -494,7 +494,49 @@ The input format is a simple JSON input with customizable fields based on the ab } ``` -## Using local dataset files +### GRPO + +GRPO uses custom reward functions and transformations. Please have them ready locally. + +For ex, to load OpenAI's GSM8K and use a random reward for completions: + +```python +# rewards.py +import random + +def rand_reward_func(completions, **kwargs) -> list[float]: + return [random.uniform(0, 1) for _ in completions] + +def oai_gsm8k_transform(cfg, *args, **kwargs): + def transform_fn(example, tokenizer=None): + label = example["answer"].split("####")[-1].strip().replace(",", "") + return { + "prompt": [{"role": "user", "content": example["question"]},], + "answer": label, + } + return transform_fn, {"remove_columns": ["question"]} +``` + +```yaml +rl: grpo + +trl: + beta: 0.001 + max_completion_length: 256 + use_vllm: True + vllm_device: auto + vllm_gpu_memory_utilization: 0.15 + num_generations: 4 + reward_funcs: ["rewards.rand_reward_func"] # format: '{file_name}.{fn_name}' +datasets: + - path: openai/gsm8k + name: main + type: rewards.oai_gsm8k_transform # format: '{file_name}.{fn_name}' +``` + +To see other examples of custom reward functions, please see [TRL GRPO Docs](https://github.com/huggingface/trl/blob/main/docs/source/grpo_trainer.md#using-a-custom-reward-function). 
+ +### Using local dataset files ```yaml datasets: @@ -505,7 +547,7 @@ datasets: type: chatml.intel ``` -## TRL auto-unwrapping for PEFT +### TRL auto-unwrapping for PEFT TRL supports auto-unwrapping PEFT models for RL training paradigms which rely on a reference model. This significantly reduces memory pressure as an additional refreference model does not need to be loaded, and reference model log-probabilities can be obtained by disabling PEFT adapters. This is enabled by default. To turn it off, pass the following config: diff --git a/docs/torchao.qmd b/docs/torchao.qmd index 2dc9117fbf..f50fc9ee13 100644 --- a/docs/torchao.qmd +++ b/docs/torchao.qmd @@ -3,6 +3,12 @@ title: "PyTorch ao" description: "Custom data types and layouts for training and inference" --- +To use experimental optimizers (`AdamWFp8`, `AdamW4bit`, `AdamW8bit`) from Pytorch Ao, please install the package as shown below. + +::: {.callout-tip} +Some experimental optimizers are already present in regular Pytorch, so please re-check if you actually need this package! +::: + ### Installation Stable Release from the PyTorch index diff --git a/docs/unsloth.qmd b/docs/unsloth.qmd index 73b2f03036..fd87f7bde0 100644 --- a/docs/unsloth.qmd +++ b/docs/unsloth.qmd @@ -8,6 +8,12 @@ description: "Hyper-optimized QLoRA finetuning for single GPUs" Unsloth provides hand-written optimized kernels for LLM finetuning that slightly improve speed and VRAM over standard industry baselines. +::: {.callout-important} +Due to breaking changes in transformers `v4.48.0`, users will need to downgrade to `<=v4.47.1` to use this patch. + +This will later be deprecated in favor of [LoRA Optimizations](lora_optims.qmd). +::: + ### Installation @@ -17,7 +23,7 @@ The following will install the correct unsloth and extras from source. python scripts/unsloth_install.py | sh ``` -### Using unsloth w Axolotl +### Usage Axolotl exposes a few configuration options to try out unsloth and get most of the performance gains. diff --git a/index.qmd b/index.qmd index ed23d235ad..01572a8bec 100644 --- a/index.qmd +++ b/index.qmd @@ -1,7 +1,7 @@ --- -toc-location: right-body -toc-title: Table Of Contents -toc-expand: 2 +# toc-location: right-body +# toc-title: Table Of Contents +# toc-expand: 2 --- ```{python} diff --git a/src/axolotl/integrations/cut_cross_entropy/README.md b/src/axolotl/integrations/cut_cross_entropy/README.md index c67d7440b9..625cbc0ecc 100644 --- a/src/axolotl/integrations/cut_cross_entropy/README.md +++ b/src/axolotl/integrations/cut_cross_entropy/README.md @@ -1,6 +1,10 @@ # Cut Cross Entropy -### Usage +Cut Cross Entropy reduces VRAM usage through optimization on the cross-entropy operation during loss calculation. 
+ +See https://github.com/apple/ml-cross-entropy + +## Usage ```yaml plugins: @@ -8,3 +12,19 @@ plugins: cut_cross_entropy: true ``` + +## Citation + +```bib +@article{wijmans2024cut, + author = {Erik Wijmans and + Brody Huval and + Alexander Hertzberg and + Vladlen Koltun and + Philipp Kr\"ahenb\"uhl}, + title = {Cut Your Losses in Large-Vocabulary Language Models}, + journal = {arXiv}, + year = {2024}, + url = {https://arxiv.org/abs/2411.09009}, +} +``` diff --git a/src/axolotl/integrations/grokfast/README.md b/src/axolotl/integrations/grokfast/README.md index 4950dde87a..7c678b07e7 100644 --- a/src/axolotl/integrations/grokfast/README.md +++ b/src/axolotl/integrations/grokfast/README.md @@ -2,7 +2,7 @@ See https://github.com/ironjr/grokfast -### Usage +## Usage ```yaml plugins: @@ -11,3 +11,14 @@ plugins: grokfast_alpha: 2.0 grokfast_lamb: 0.98 ``` + +## Citation + +```bib +@article{lee2024grokfast, + title={{Grokfast}: Accelerated Grokking by Amplifying Slow Gradients}, + author={Lee, Jaerin and Kang, Bong Gyun and Kim, Kihoon and Lee, Kyoung Mu}, + journal={arXiv preprint arXiv:2405.20233}, + year={2024} +} +``` diff --git a/src/axolotl/integrations/kd/README.md b/src/axolotl/integrations/kd/README.md new file mode 100644 index 0000000000..4b15ad31dd --- /dev/null +++ b/src/axolotl/integrations/kd/README.md @@ -0,0 +1,23 @@ +# Knowledge Distillation + +## Usage + +```yaml +plugins: + - "axolotl.integrations.kd.KDPlugin" + +kd_trainer: True +kd_ce_alpha: 0.1 +kd_alpha: 0.9 +kd_temperature: 1.0 + +torch_compile: True # torch>=2.5.1, recommended to reduce vram + +datasets: + - path: ... + type: "axolotl.integrations.kd.chat_template" + field_messages: "messages_combined" + logprobs_field: "llm_text_generation_vllm_logprobs" # for kd only, field of logprobs +``` + +An example dataset can be found at [`axolotl-ai-co/evolkit-logprobs-pipeline-75k-v2-sample`](https://huggingface.co/datasets/axolotl-ai-co/evolkit-logprobs-pipeline-75k-v2-sample) diff --git a/src/axolotl/integrations/liger/README.md b/src/axolotl/integrations/liger/README.md new file mode 100644 index 0000000000..16164d72f2 --- /dev/null +++ b/src/axolotl/integrations/liger/README.md @@ -0,0 +1,36 @@ +# Liger Kernel Integration + +Liger Kernel provides efficient Triton kernels for LLM training, offering: + +- 20% increase in multi-GPU training throughput +- 60% reduction in memory usage +- Compatibility with both FSDP and DeepSpeed + +See https://github.com/linkedin/Liger-Kernel + +## Usage + +```yaml +plugins: + - axolotl.integrations.liger.LigerPlugin +liger_rope: true +liger_rms_norm: true +liger_glu_activation: true +liger_layer_norm: true +liger_fused_linear_cross_entropy: true +``` + +## Citation + +```bib +@article{hsu2024ligerkernelefficienttriton, + title={Liger Kernel: Efficient Triton Kernels for LLM Training}, + author={Pin-Lun Hsu and Yun Dai and Vignesh Kothapalli and Qingquan Song and Shao Tang and Siyu Zhu and Steven Shimizu and Shivam Sahni and Haowen Ning and Yanning Chen}, + year={2024}, + eprint={2410.10989}, + archivePrefix={arXiv}, + primaryClass={cs.LG}, + url={https://arxiv.org/abs/2410.10989}, + journal={arXiv preprint arXiv:2410.10989}, +} +``` diff --git a/src/axolotl/integrations/lm_eval/README.md b/src/axolotl/integrations/lm_eval/README.md index 3724c49ccf..f6ed5416ec 100644 --- a/src/axolotl/integrations/lm_eval/README.md +++ b/src/axolotl/integrations/lm_eval/README.md @@ -1,6 +1,10 @@ # LM Eval Harness -### Usage +Run evaluation on model using the popular lm-evaluation-harness library. 
+ +See https://github.com/EleutherAI/lm-evaluation-harness + +## Usage ```yaml plugins: @@ -10,4 +14,22 @@ lm_eval_tasks: - gsm8k - hellaswag - arc_easy + +lm_eval_batch_size: # Batch size for evaluation +output_dir: # Directory to save evaluation results +``` + +## Citation + +```bib +@misc{eval-harness, + author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, + title = {A framework for few-shot language model evaluation}, + month = 07, + year = 2024, + publisher = {Zenodo}, + version = {v0.4.3}, + doi = {10.5281/zenodo.12608602}, + url = {https://zenodo.org/records/12608602} +} ``` diff --git a/src/axolotl/integrations/spectrum/README.md b/src/axolotl/integrations/spectrum/README.md index 192918060e..0f78a511b5 100644 --- a/src/axolotl/integrations/spectrum/README.md +++ b/src/axolotl/integrations/spectrum/README.md @@ -1,15 +1,17 @@ -## Spectrum: Targeted Training on Signal to Noise Ratio +# Spectrum: Targeted Training on Signal to Noise Ratio by Eric Hartford, Lucas Atkins, Fernando Fernandes, David Golchinfar This plugin contains code to freeze the bottom fraction of modules in a model, based on the Signal-to-Noise Ratio (SNR). -### Overview +See https://github.com/cognitivecomputations/spectrum + +## Overview Spectrum is a tool for scanning and evaluating the Signal-to-Noise Ratio (SNR) of layers in large language models. By identifying the top n% of layers with the highest SNR, you can optimize training efficiency. -### Usage +## Usage ```yaml plugins: @@ -19,3 +21,17 @@ spectrum_top_fraction: 0.5 # Optional if using a pre-scanned model as your base_model. 
Useful if using a model mirror spectrum_model_name: meta-llama/Meta-Llama-3.1-8B ``` + +## Citation + +```bib +@misc{hartford2024spectrumtargetedtrainingsignal, + title={Spectrum: Targeted Training on Signal to Noise Ratio}, + author={Eric Hartford and Lucas Atkins and Fernando Fernandes Neto and David Golchinfar}, + year={2024}, + eprint={2406.06623}, + archivePrefix={arXiv}, + primaryClass={cs.LG}, + url={https://arxiv.org/abs/2406.06623}, +} +``` diff --git a/styles.css b/styles.css index 2e5aa6de8f..891349b4b0 100644 --- a/styles.css +++ b/styles.css @@ -1,5 +1,193 @@ -/* css styles */ +/* TYPOGRAPHY SECTION */ -img[alt="Axolotl"] { - content: url("https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/887513285d98132142bf5db2a74eb5e0928787f1/image/axolotl_logo_digital_black.svg") !important; +/* Import fonts */ +@import url('https://fonts.googleapis.com/css2?family=Be+Vietnam+Pro:wght@400;500&display=swap'); +@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400&display=swap'); + +/* Typography hierarchy */ +:root { + --font-title: 'Be Vietnam Pro', sans-serif; + --font-body: 'JetBrains Mono', monospace; +} + +/* Title (h1) */ +h1 { + font-family: var(--font-title); + font-weight: 400; + font-size: 6rem; + line-height: 1.1; + letter-spacing: -0.05em; + font-feature-settings: "ss01" on; +} + +/* Heading (h2) */ +h2 { + font-family: var(--font-title); + font-weight: 500; + font-size: 2rem; + line-height: 1.2; + letter-spacing: -0.03em; + font-feature-settings: "ss01" on; +} + +/* Subtitle/Preamble */ +h3, +h4 { + font-family: var(--font-body); + font-weight: 400; + font-size: 1.5rem; + line-height: 1.5; + letter-spacing: -0.02em; +} + +/* Body text */ +body { + font-family: var(--font-body); + font-weight: 400; + font-size: 1rem; + line-height: 1.5; + letter-spacing: -0.02em; +} + +/* Links */ +a { + font-family: var(--font-body); + font-weight: 400; + font-size: 0.875rem; + line-height: 1; + letter-spacing: -0.02em; +} + +/* NAV BAR SECTION */ + +/* Navbar logo styling */ +.navbar-brand img { + height: 32px; + margin-right: 10px; +} + +/* COLORS SECTION */ + +/* Brand colors */ +:root { + --white: #ffffff; + --greige-300: #EEEEE7; + --greige-600: #CCCAC0; + --black: #141310; + --lime: #E3F8A8; + --cyan: #A0F4EA; + --purple: #C8D0F8; +} + +/* Base styles */ +body { + background-color: var(--black); + color: var(--greige-300); +} + +/* Navigation */ +.navbar { + background-color: var(--black) !important; +} + +.navbar-dark .navbar-nav .nav-link { + color: var(--greige-300); +} + +.navbar-dark .navbar-nav .nav-link:hover { + color: var(--lime); +} + +/* Sidebar */ +.sidebar-navigation { + background-color: var(--black); + border-right: 1px solid var(--greige-600); +} + +.sidebar nav[role="doc-toc"] ul>li>a { + color: var(--greige-300); +} + +.sidebar nav[role="doc-toc"] ul>li>a:hover { + color: var(--lime); +} + +/* Links */ +a { + color: var(--lime); +} + +a:hover { + color: var(--cyan); +} + +/* Headers */ +h1, +h2, +h3, +h4, +h5, +h6 { + color: var(--white); +} + +/* Code blocks */ +pre { + background-color: #1a1a1a !important; + border: 1px solid var(--greige-600); +} + +/* Tables */ +.table { + color: var(--greige-300); +} + +/* TOC */ +#toc-title { + color: var(--white); +} + +.toc-active { + color: var(--lime) !important; +} + +/* Buttons */ +.btn-primary { + background-color: var(--lime); + color: var(--black); + border: none; +} + +.btn-primary:hover { + background-color: var(--cyan); + color: var(--black); +} + +/* For inline code (single backtick) */ 
+code { + background-color: #1a1a1a !important; + color: var(--lime) !important; + padding: 2px 4px; + border-radius: 4px; +} + +/* For inline code that is also a link */ +a code { + color: var(--cyan) !important; +} + +/* For code blocks (triple backtick) */ +pre.sourceCode { + background-color: #1a1a1a !important; +} + +/* Make comments in bash/shell scripts green */ +code span.co { + color: #5cb85c !important; +} + +/* Remove underlines from JSON comments and make them green */ +code span.er { + color: #5cb85c !important; + text-decoration: none !important; }