[NeuralChat] support LLaVA multi-modal training. (#784)
1 parent 7f0090e, commit ecb4480
Showing 22 changed files with 2,581 additions and 2 deletions.
66 changes: 66 additions & 0 deletions
...xtension_for_transformers/neural_chat/examples/finetuning/multi_modal/README.md
# Multi-Modal

Large Language and Vision Assistant (LLaVA) is a multi-modal training framework proposed in [Visual Instruction Tuning](https://arxiv.org/abs/2304.08485) and [Improved Baselines with Visual Instruction Tuning](https://arxiv.org/abs/2310.03744). This example demonstrates how to train a multi-modal model on Intel Gaudi2.

## Train

LLaVA training consists of two stages: (1) feature alignment stage: use our 558K subset of the LAION-CC-SBU dataset to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data, plus around 515K VQA data from academic-oriented tasks, to teach the model to follow multimodal instructions.

### Pretraining

##### Prepare data

Download the 558K subset of the LAION-CC-SBU dataset with BLIP captions from [liuhaotian/LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) into `./pretraining_data`.
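
If you prefer to script the download, here is a minimal sketch using `huggingface_hub` (the package and target layout are assumptions; any method that places the dataset files in `./pretraining_data` works):

```python
# Sketch: download the LLaVA-Pretrain subset into ./pretraining_data.
# Assumes the huggingface_hub package is installed; adjust local_dir as needed.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="liuhaotian/LLaVA-Pretrain",
    repo_type="dataset",
    local_dir="./pretraining_data",
)
```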

##### Training

Training script with DeepSpeed ZeRO-2: `scripts/pretrain.sh`.

- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector (see the sketch after the note below).
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
- `--use_habana, --use_lazy_mode`: settings for Intel Gaudi2.

**Note:** If you don't set `--use_habana, --use_lazy_mode`, the code can also run on GPUs.
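
For reference, `mlp2x_gelu` conventionally names a Linear → GELU → Linear connector between the vision encoder and the LLM. The PyTorch sketch below illustrates the naming convention; it is not the code invoked by `scripts/pretrain.sh`, and the hidden sizes are examples:

```python
import torch.nn as nn

# Illustrative sketch of an "mlp2x_gelu" vision-language connector:
# two linear layers with a GELU in between, mapping vision features
# (e.g. CLIP ViT-L/14 hidden size 1024) to the LLM hidden size.
def build_mlp2x_gelu_projector(vision_hidden_size: int, llm_hidden_size: int) -> nn.Module:
    return nn.Sequential(
        nn.Linear(vision_hidden_size, llm_hidden_size),
        nn.GELU(),
        nn.Linear(llm_hidden_size, llm_hidden_size),
    )
```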

### Visual Instruction Tuning

##### Prepare data

Please download the annotation of the final mixture of our instruction tuning data, [llava_v1_5_mix665k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/llava_v1_5_mix665k.json), and download the images from its constituent datasets:

- COCO: [train2017](http://images.cocodataset.org/zips/train2017.zip)
- GQA: [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)
- OCR-VQA: [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing), **we save all files as `.jpg`**
- TextVQA: [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)
- VisualGenome: [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)

After downloading all of them, organize the data as follows in `./finetuning_data`:

```
├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
└── vg
    ├── VG_100K
    └── VG_100K_2
```
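
Before launching training, you can sanity-check the layout with a small script like the sketch below (a hypothetical helper, not part of this example):

```python
# Sketch: verify the expected ./finetuning_data layout before starting fine-tuning.
from pathlib import Path

EXPECTED_DIRS = [
    "coco/train2017",
    "gqa/images",
    "ocr_vqa/images",
    "textvqa/train_images",
    "vg/VG_100K",
    "vg/VG_100K_2",
]

root = Path("./finetuning_data")
missing = [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
if missing:
    raise SystemExit(f"Missing image directories: {missing}")
print("All expected image directories are present.")
```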

##### Start training!

Training script with DeepSpeed ZeRO-3: `scripts/finetune.sh`; LoRA fine-tuning is enabled by running `scripts/finetune_lora.sh` instead.

New options to note:

- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector.
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
- `--image_aspect_ratio pad`: this pads non-square images to square instead of cropping them; it slightly reduces hallucination (a sketch of this padding follows the note below).
- `--group_by_modality_length True`: this should only be used when your instruction tuning dataset contains both language data (e.g. ShareGPT) and multimodal data (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe speeds up training by ~25% and does not affect the final outcome.
- `--use_habana, --use_lazy_mode`: settings for Intel Gaudi2.

**Note:** If you don't set `--use_habana, --use_lazy_mode`, the code can also run on GPUs.
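
For intuition, `--image_aspect_ratio pad` pads each image to a square canvas before preprocessing instead of center-cropping it. The sketch below shows the idea, mirroring the `expand2square` helper in `conversation_utils.py` further down; the function name here is illustrative and the fill color follows that helper's default:

```python
# Sketch: pad a PIL image to a square canvas instead of cropping it,
# so no visual content is discarded (the behavior --image_aspect_ratio pad enables).
from PIL import Image

def pad_to_square(pil_img: Image.Image, fill=(122, 116, 104)) -> Image.Image:
    width, height = pil_img.size
    if width == height:
        return pil_img
    side = max(width, height)
    canvas = Image.new(pil_img.mode, (side, side), fill)
    # Center the original image on the square canvas.
    canvas.paste(pil_img, ((side - width) // 2, (side - height) // 2))
    return canvas
```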
265 changes: 265 additions & 0 deletions
...ension_for_transformers/neural_chat/examples/finetuning/multi_modal/conversation_utils.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2021 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import dataclasses
from enum import auto, Enum
from typing import List, Tuple


class SeparatorStyle(Enum):
    """Different separator style."""
    SINGLE = auto()
    TWO = auto()
    MPT = auto()
    PLAIN = auto()
    LLAMA_2 = auto()


@dataclasses.dataclass
class Conversation:
    """A class that keeps all conversation history."""
    system: str
    roles: List[str]
    messages: List[List[str]]
    offset: int
    sep_style: SeparatorStyle = SeparatorStyle.SINGLE
    sep: str = "###"
    sep2: str = None
    version: str = "Unknown"

    skip_next: bool = False

    def get_prompt(self):
        """Render the conversation into a single prompt string for the configured separator style."""
        messages = self.messages
        if len(messages) > 0 and type(messages[0][1]) is tuple:
            messages = self.messages.copy()
            init_role, init_msg = messages[0].copy()
            init_msg = init_msg[0].replace("<image>", "").strip()
            if 'mmtag' in self.version:
                messages[0] = (init_role, init_msg)
                messages.insert(0, (self.roles[0], "<Image><image></Image>"))
                messages.insert(1, (self.roles[1], "Received."))
            else:
                messages[0] = (init_role, "<image>\n" + init_msg)

        if self.sep_style == SeparatorStyle.SINGLE:
            ret = self.system + self.sep
            for role, message in messages:
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    ret += role + ": " + message + self.sep
                else:
                    ret += role + ":"
        elif self.sep_style == SeparatorStyle.TWO:
            seps = [self.sep, self.sep2]
            ret = self.system + seps[0]
            for i, (role, message) in enumerate(messages):
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    ret += role + ": " + message + seps[i % 2]
                else:
                    ret += role + ":"
        elif self.sep_style == SeparatorStyle.MPT:
            ret = self.system + self.sep
            for role, message in messages:
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    ret += role + message + self.sep
                else:
                    ret += role
        elif self.sep_style == SeparatorStyle.LLAMA_2:
            wrap_sys = lambda msg: f"<<SYS>>\n{msg}\n<</SYS>>\n\n"
            wrap_inst = lambda msg: f"[INST] {msg} [/INST]"
            ret = ""

            for i, (role, message) in enumerate(messages):
                if i == 0:
                    assert message, "first message should not be none"
                    assert role == self.roles[0], "first message should come from user"
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    if i == 0:
                        message = wrap_sys(self.system) + message
                    if i % 2 == 0:
                        message = wrap_inst(message)
                        ret += self.sep + message
                    else:
                        ret += " " + message + " " + self.sep2
                else:
                    ret += ""
            ret = ret.lstrip(self.sep)
        elif self.sep_style == SeparatorStyle.PLAIN:
            seps = [self.sep, self.sep2]
            ret = self.system
            for i, (role, message) in enumerate(messages):
                if message:
                    if type(message) is tuple:
                        message, _, _ = message
                    ret += message + seps[i % 2]
                else:
                    ret += ""
        else:
            raise ValueError(f"Invalid style: {self.sep_style}")

        return ret

    def append_message(self, role, message):
        self.messages.append([role, message])

    def get_images(self, return_pil=False):
        """Collect the user-turn images, resized according to their process mode, as PIL images or base64 PNG strings."""
        images = []
        for i, (role, msg) in enumerate(self.messages[self.offset:]):
            if i % 2 == 0:
                if type(msg) is tuple:
                    import base64
                    from io import BytesIO
                    from PIL import Image
                    msg, image, image_process_mode = msg
                    if image_process_mode == "Pad":
                        def expand2square(pil_img, background_color=(122, 116, 104)):
                            width, height = pil_img.size
                            if width == height:
                                return pil_img
                            elif width > height:
                                result = Image.new(pil_img.mode, (width, width), background_color)
                                result.paste(pil_img, (0, (width - height) // 2))
                                return result
                            else:
                                result = Image.new(pil_img.mode, (height, height), background_color)
                                result.paste(pil_img, ((height - width) // 2, 0))
                                return result
                        image = expand2square(image)
                    elif image_process_mode in ["Default", "Crop"]:
                        pass
                    elif image_process_mode == "Resize":
                        image = image.resize((336, 336))
                    else:
                        raise ValueError(f"Invalid image_process_mode: {image_process_mode}")
                    max_hw, min_hw = max(image.size), min(image.size)
                    aspect_ratio = max_hw / min_hw
                    max_len, min_len = 800, 400
                    shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
                    longest_edge = int(shortest_edge * aspect_ratio)
                    W, H = image.size
                    if longest_edge != max(image.size):
                        if H > W:
                            H, W = longest_edge, shortest_edge
                        else:
                            H, W = shortest_edge, longest_edge
                        image = image.resize((W, H))
                    if return_pil:
                        images.append(image)
                    else:
                        buffered = BytesIO()
                        image.save(buffered, format="PNG")
                        img_b64_str = base64.b64encode(buffered.getvalue()).decode()
                        images.append(img_b64_str)
        return images

    def to_gradio_chatbot(self):
        """Convert the history into Gradio chatbot message pairs, inlining images as base64 <img> tags."""
        ret = []
        for i, (role, msg) in enumerate(self.messages[self.offset:]):
            if i % 2 == 0:
                if type(msg) is tuple:
                    import base64
                    from io import BytesIO
                    msg, image, image_process_mode = msg
                    max_hw, min_hw = max(image.size), min(image.size)
                    aspect_ratio = max_hw / min_hw
                    max_len, min_len = 800, 400
                    shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw))
                    longest_edge = int(shortest_edge * aspect_ratio)
                    W, H = image.size
                    if H > W:
                        H, W = longest_edge, shortest_edge
                    else:
                        H, W = shortest_edge, longest_edge
                    image = image.resize((W, H))
                    buffered = BytesIO()
                    image.save(buffered, format="JPEG")
                    img_b64_str = base64.b64encode(buffered.getvalue()).decode()
                    img_str = f'<img src="data:image/jpeg;base64,{img_b64_str}" alt="user upload image" />'
                    msg = img_str + msg.replace('<image>', '').strip()
                    ret.append([msg, None])
                else:
                    ret.append([msg, None])
            else:
                ret[-1][-1] = msg
        return ret

    def copy(self):
        return Conversation(
            system=self.system,
            roles=self.roles,
            messages=[[x, y] for x, y in self.messages],
            offset=self.offset,
            sep_style=self.sep_style,
            sep=self.sep,
            sep2=self.sep2,
            version=self.version)

    def dict(self):
        if len(self.get_images()) > 0:
            return {
                "system": self.system,
                "roles": self.roles,
                "messages": [[x, y[0] if type(y) is tuple else y] for x, y in self.messages],
                "offset": self.offset,
                "sep": self.sep,
                "sep2": self.sep2,
            }
        return {
            "system": self.system,
            "roles": self.roles,
            "messages": self.messages,
            "offset": self.offset,
            "sep": self.sep,
            "sep2": self.sep2,
        }


conv_llava_plain = Conversation(
    system="",
    roles=("", ""),
    messages=(),
    offset=0,
    sep_style=SeparatorStyle.PLAIN,
    sep="\n",
)


conv_llava_v1 = Conversation(
    system="A chat between a curious human and an artificial intelligence assistant. "
           "The assistant gives helpful, detailed, and polite answers to the human's questions.",
    roles=("User", "Assistant"),
    version="v1",
    messages=(),
    offset=0,
    sep_style=SeparatorStyle.TWO,
    sep=" ",
    sep2="</s>",
)

conv_templates = {
    "v1": conv_llava_v1,
    "plain": conv_llava_plain,
}
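
For reference, a minimal usage sketch of the templates defined above (a hypothetical example, not part of the committed file):

```python
from conversation_utils import conv_templates  # assumes this module is importable

# Build a two-turn prompt with the "v1" template.
conv = conv_templates["v1"].copy()
conv.append_message(conv.roles[0], "<image>\nWhat is shown in this picture?")
conv.append_message(conv.roles[1], None)  # placeholder for the model's reply
prompt = conv.get_prompt()
print(prompt)
```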