applications/Chat/README.md (+2 -4)
@@ -200,7 +200,6 @@ We provide an online inference server and a benchmark. We aim to run inference o
We support 8-bit quantization (RTN), 4-bit quantization (GPTQ), and FP16 inference.

Online inference server scripts can help you deploy your own services.
-
For more details, see [`inference/`](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat/inference).

## Coati7B examples
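For readers unfamiliar with the 8-bit (RTN) option mentioned in the hunk above: round-to-nearest quantization simply rescales a weight tensor onto an integer grid and rounds. The sketch below is a generic NumPy illustration of that idea, not the repository's implementation, and the symmetric per-tensor scaling is an assumption made for brevity.

```python
import numpy as np

def rtn_quantize_int8(weights: np.ndarray):
    """Round-to-nearest (RTN) 8-bit quantization: scale, round, clip.

    Symmetric per-tensor scaling is assumed here for simplicity;
    real implementations usually quantize per channel or per group.
    """
    scale = np.abs(weights).max() / 127.0                 # map the largest magnitude into int8 range
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def rtn_dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

# Tiny usage example: quantize a random weight matrix and check the round-trip error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = rtn_quantize_int8(w)
print(np.abs(w - rtn_dequantize(q, s)).max())             # small reconstruction error
```

GPTQ, by contrast, adjusts the remaining weights to compensate for the rounding error, which is why it holds up better at 4-bit than plain RTN.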
@@ -428,7 +427,7 @@ Thanks so much to all of our amazing contributors!
</a>
</div>

- - An open-source lowcost solution for cloning [ChatGPT](https://openai.com/blog/chatgpt/) with a complete RLHF pipeline. [[demo]](https://chat.colossalai.org)
+ - An open-source low-cost solution for cloning [ChatGPT](https://openai.com/blog/chatgpt/) with a complete RLHF pipeline. [[demo]](https://chat.colossalai.org)
applications/Chat/evaluate/README.md (+1 -1)
@@ -348,7 +348,7 @@ For example, if you want to add a new metric `persuasiveness` into category `bra

<details><summary><b>How can I add a new UniEval evaluation metric?</b></summary>

- For example, if you want to add a new metric `persuasiveness` into task `data2text`, you should add a Boolean QA question about the metric in function `add_question` in `unieval/utils.py`. Please do note that how effectively the model would evaluate this metric is unknown and you may need some experiments to test whether the model is capable of evaluating this metric.
+ For example, if you want to add a new metric `persuasiveness` into task `data2text`, you should add a Boolean QA question about the metric in function `add_question` in `unieval/utils.py`. Please do note that how effectively the model would evaluate this metric is unknown, and you may need some experiments to test whether the model is capable of evaluating this metric.
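To make the `add_question` change described above more concrete, the sketch below shows what a new `persuasiveness` branch might look like. The actual signature and branch structure of `add_question` in `unieval/utils.py` are not visible in this diff, so everything here, including the question wording, is an assumption modeled on UniEval's usual Boolean QA prompt format.

```python
from typing import Optional

# Hypothetical excerpt from unieval/utils.py; the real function's signature and
# existing branches may differ. Only the pattern of adding a Boolean QA question
# for a new evaluation dimension is illustrated.
def add_question(dimension: str, output: str, src: Optional[str] = None) -> str:
    """Build the yes/no question that the UniEval model scores for one dimension."""
    if dimension == "persuasiveness":
        # New data2text metric: phrase the metric as a Boolean question about the output.
        return f"question: Is this a persuasive utterance? </s> utterance: {output}"
    if dimension == "naturalness":
        # Existing data2text metric, shown only to illustrate the branch pattern.
        return f"question: Is this a fluent utterance? </s> utterance: {output}"
    raise NotImplementedError(f"Unsupported evaluation dimension: {dimension}")
```

Whether the evaluator actually produces meaningful scores for the new question is, as the README itself notes, something you would need to verify experimentally.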
applications/Chat/examples/community/peft/README.md (+1 -1)
@@ -20,7 +20,7 @@ pip install .

For SFT training, just call train_peft_sft.py

- Its arguments are almost identical to train_sft.py instead adding a new eval_dataset if you have a eval_dataset file. The data file is just a plain datafile, please check the format in the easy_dataset.py.
+ Its arguments are almost identical to train_sft.py instead adding a new eval_dataset if you have an eval_dataset file. The data file is just a plain datafile, please check the format in the easy_dataset.py.

For stage-3 rlhf training, call train_peft_prompts.py.
Its arguments are almost identical to train_prompts.py. The only difference is that I use text files to indicate the prompt and pretrained data file. The models are included in easy_models.py. Currently only bloom models are tested, but technically gpt2/opt/llama should be supported.
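The PEFT scripts referenced above are not part of this diff, so as background here is a minimal sketch of the LoRA wrapping pattern that a script like train_peft_sft.py typically builds on, using the Hugging Face peft library. The checkpoint name, target modules, and hyperparameters are illustrative placeholders, not values taken from the repository; a BLOOM checkpoint is used only because the README says bloom models are the ones tested.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder checkpoint; substitute whatever base model you actually fine-tune.
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

# LoRA injects small trainable rank-decomposition matrices into selected layers,
# so only a tiny fraction of parameters is updated during SFT.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the LoRA update matrices (illustrative)
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # BLOOM-style fused attention projection
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # typically well under 1% of total parameters
```

The wrapped model can then be trained with an ordinary causal-LM fine-tuning loop on the SFT data, and only the LoRA adapter weights need to be saved.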