Update 2.0.0 #60

Merged: 54 commits, Mar 6, 2024
- e3e6df5: update code for llm-2.0 (mgqa34, Feb 19, 2024)
- bc1a310: update dataset for llm-2.0 (mgqa34, Feb 19, 2024)
- 8d25ba8: fix chatglm2 detection (mgqa34, Feb 20, 2024)
- 948efd2: Merge pull request #44 from FederatedAI/feature-2.0.0-pellm (talkingwallace, Feb 20, 2024)
- 6231f45: Add offsite-tuning contents (talkingwallace, Feb 20, 2024)
- dad2cde: delete unused code (mgqa34, Feb 20, 2024)
- 127c427: Add offsite-tuning support (talkingwallace, Feb 21, 2024)
- 9238182: Merge pull request #45 from FederatedAI/dev-2.0.0-ot (mgqa34, Feb 22, 2024)
- 8733232: Update modelzoo (talkingwallace, Feb 22, 2024)
- 29af45a: Merge pull request #46 from FederatedAI/dev-2.0.0-ot (mgqa34, Feb 22, 2024)
- 32ae458: feat: implement fedkseed (#47) (sagewe, Feb 22, 2024)
- 3b77e4d: feat: add fedkseed runner (#47) (sagewe, Feb 22, 2024)
- 40ecdeb: fix: refactor args (#47) (sagewe, Feb 22, 2024)
- 321f426: Fix ot bug (talkingwallace, Feb 22, 2024)
- 69bc8a6: support qwen (mgqa34, Feb 23, 2024)
- 76e2d5f: Merge pull request #49 from FederatedAI/feature-2.0.0-pellm (mgqa34, Feb 23, 2024)
- 8c26e81: Merge pull request #50 from FederatedAI/dev-2.0.0-ot-fix (mgqa34, Feb 23, 2024)
- 18f82a6: update data_collator and tokenizer (mgqa34, Feb 23, 2024)
- d25c4a1: Merge pull request #51 from FederatedAI/feature-2.0.0-pellm (mgqa34, Feb 23, 2024)
- b7c02c7: Fix codes (talkingwallace, Feb 26, 2024)
- 63ace1d: Merge pull request #53 from FederatedAI/dev-2.0.0-fix-ot-files (mgqa34, Feb 26, 2024)
- 86fa886: Remove model_zoo codes & fix runner (talkingwallace, Feb 26, 2024)
- a8add72: update code to support seq_cls in pellm (mgqa34, Feb 26, 2024)
- 2a20d86: fix trainer save trainable bug (mgqa34, Feb 26, 2024)
- 1f4b6e5: Merge pull request #54 from FederatedAI/feature-2.0.0-pellm (mgqa34, Feb 26, 2024)
- 17e2d49: Fix model forward (talkingwallace, Feb 26, 2024)
- 2e31a24: Merge pull request #55 from FederatedAI/dev-2.0.0-fix-ot-files (mgqa34, Feb 26, 2024)
- 7654d93: update requirements (mgqa34, Feb 26, 2024)
- fa157a5: Merge pull request #56 from FederatedAI/feature-2.0.0-pellm (talkingwallace, Feb 26, 2024)
- e0e7cac: fix: add hf dataset and hf model (#47) (sagewe, Feb 26, 2024)
- af63457: chore: add log (#47) (sagewe, Feb 27, 2024)
- 50fa242: Fix multi-GPU bugs (talkingwallace, Feb 27, 2024)
- 7175325: Merge pull request #57 from FederatedAI/dev-2.0.0-fix-ot-files (mgqa34, Feb 27, 2024)
- 7661074: Merge pull request #48 from FederatedAI/dev-2.0.0-fedkseed (mgqa34, Feb 27, 2024)
- 7ad859f: fix pellm (mgqa34, Feb 29, 2024)
- f6924c4: docs: add fedkseed (#47) (sagewe, Feb 29, 2024)
- b84c88f: update doc of pellm (mgqa34, Feb 29, 2024)
- 7c2e019: Merge pull request #59 from FederatedAI/dev-2.0.0-fedkseed (mgqa34, Feb 29, 2024)
- 955e511: update release note of fate-llm 2.0 (mgqa34, Feb 29, 2024)
- 8fcc0a6: Merge pull request #58 from FederatedAI/feature-2.0.0-pellm (mgqa34, Feb 29, 2024)
- 5397005: chore: add licence (#47) (sagewe, Mar 1, 2024)
- 6133366: Merge pull request #62 from FederatedAI/dev-2.0.0-fedkseed (sagewe, Mar 1, 2024)
- 646c6ac: update doc of ChatGLM3 (mgqa34, Mar 1, 2024)
- 0261e17: Merge pull request #63 from FederatedAI/feature-2.0.0-update_pellm_doc (mgqa34, Mar 1, 2024)
- d01a77a: Update ot doc (talkingwallace, Mar 1, 2024)
- efc1074: Fix doc path (talkingwallace, Mar 1, 2024)
- 2cf6dbc: Merge pull request #64 from FederatedAI/dev-2.0.0-llm-doc (mgqa34, Mar 1, 2024)
- c7cdd02: Update Readme (talkingwallace, Mar 1, 2024)
- c7a3822: Merge pull request #65 from FederatedAI/dev-2.0.0-llm-doc (mgqa34, Mar 1, 2024)
- 7a9a02a: update readme (mgqa34, Mar 6, 2024)
- 10643aa: update builtin_pellm_models doc (mgqa34, Mar 6, 2024)
- 9de48e8: fix builtin_pellm_models doc (mgqa34, Mar 6, 2024)
- c53749b: update deployment desc of llm-2.0 (mgqa34, Mar 6, 2024)
- fea0580: Merge pull request #69 from FederatedAI/feature-2.0.0-update_doc (mgqa34, Mar 6, 2024)
12 changes: 6 additions & 6 deletions README.md
```diff
@@ -17,14 +17,14 @@ FATE-LLM is a framework to support federated learning for large language models(

 ### Standalone deployment
 Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
-Deploy FATE-Standalone version with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
+* To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then make a new directory `{fate_install}/fate_llm` and clone the code into it, install the python requirements, and add `{fate_install}/fate_llm/python` to `PYTHONPATH`
+* To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`

 ### Cluster deployment
 Use [FATE-LLM deployment packages](https://github.com/FederatedAI/FATE/wiki/Download#llm%E9%83%A8%E7%BD%B2%E5%8C%85) to deploy, refer to [FATE-Cluster deployment](https://github.com/FederatedAI/FATE#cluster-deployment) for more deployment details.

 ## Quick Start
-- [Offsite-tuning Tutorial: Model Definition and Job Submission](./doc/tutorial/offsite_tuning/Offsite_tuning_tutorial.ipynb)
-- [FedIPR Tutorial: Add Watermarks to Your Model](./doc/tutorial/fed_ipr/FedIPR-tutorial.ipynb)
-- [Federated ChatGLM-6B Training](./doc/tutorial/parameter_efficient_llm/ChatGLM-6B_ds.ipynb)
-- [GPT-2 Training](./doc/tutorial/parameter_efficient_llm/GPT2-example.ipynb)
-- [Builtin Models In PELLM](./doc/tutorial/builtin_models.md)
+- [Federated ChatGLM3-6B Training](./doc/tutorial/parameter_efficient_llm/ChatGLM3-6B_ds.ipynb)
+- [Builtin Models In PELLM](./doc/tutorial/builtin_pellm_models.md)
+- [Offsite Tuning Tutorial](./doc/tutorial/offsite_tuning/Offsite_tuning_tutorial.ipynb)
+- [FedKSeed](./doc/tutorial/fedkseed/fedkseed-example.ipynb)
```
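The v2.0 standalone deployment steps described in the README can be sketched as shell commands. This is a sketch under assumptions: the clone URL and the `python/requirements.txt` path are not stated in the diff and may need adjusting to your actual deployment source.

```shell
# Sketch of the FATE-LLM v2.0 standalone deployment described above.
# Assumptions: clone URL and requirements path; adjust as needed.
fate_install=/data/projects/fate   # wherever FATE-Standalone (>= 2.1) lives

mkdir -p "${fate_install}/fate_llm"
git clone https://github.com/FederatedAI/FATE-LLM.git "${fate_install}/fate_llm"
pip install -r "${fate_install}/fate_llm/python/requirements.txt"

# Make the fate_llm package importable, as the README instructs.
export PYTHONPATH="${fate_install}/fate_llm/python:${PYTHONPATH}"
```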
8 changes: 8 additions & 0 deletions RELEASE.md
```diff
@@ -1,3 +1,11 @@
+## Release 2.0.0
+### Major Features and Improvements
+* Adapt to the fate-v2.0 framework:
+  * Migrate parameter-efficient fine-tuning training methods and models.
+  * Migrate Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+).
+  * New trainer, dataset, and data_processing function designs.
+* New FedKSeed federated tuning algorithm: train large language models in a federated learning setting with extremely low communication cost.
+
 ## Release 1.3.0
 ### Major Features and Improvements
 * FTL-LLM (Federated Learning + Transfer Learning + LLM)
```
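The "extremely low communication cost" claim in the release note rests on a seed-based trick. As a rough conceptual sketch (not the FATE-LLM API, and with `perturbation`, `apply_updates`, and all values being illustrative assumptions): all parties share a fixed pool of K random seeds, a client reports its update only as (seed index, scalar step) pairs, and everyone regenerates the full perturbation directions locally from the seeds, so only scalars cross the network.

```python
import random

def perturbation(seed, dim):
    """Deterministically regenerate a perturbation direction from a seed."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def apply_updates(params, updates, seeds, lr=0.1):
    """Apply (seed_index, scalar) update pairs; directions are rebuilt locally."""
    new = list(params)
    for k, scalar in updates:
        direction = perturbation(seeds[k], len(params))
        for i in range(len(new)):
            new[i] -= lr * scalar * direction[i]
    return new

seeds = [101, 202, 303]               # K candidate seeds shared by all parties
params = [0.0] * 4                    # toy model parameters
client_msg = [(0, 0.5), (2, -0.25)]   # only these scalars are communicated
params = apply_updates(params, client_msg, seeds)
```

Because `perturbation` is deterministic given a seed, any party holding the seed pool reconstructs the same update from the two scalars alone.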
21 changes: 0 additions & 21 deletions doc/tutorial/builtin_models.md

This file was deleted.

21 changes: 21 additions & 0 deletions doc/tutorial/builtin_pellm_models.md
```diff
@@ -0,0 +1,21 @@
+## Builtin PELLM Models
+FATE-LLM provides some builtin PELLM models; you can use them directly to efficiently train your language models.
+To use these models, please read the tutorial [ChatGLM-6B Training Guide](./ChatGLM-6B_ds.ipynb).
+After reading the training tutorial above, you can switch to any other model listed in the table below by changing the `module_name`, `class_name`, and `dataset` accordingly.
+
+
+
+| Model | ModuleName | ClassName | DataSetName |
+| -------------- | ----------------- | --------------| --------------- |
+| Qwen2 | pellm.qwen | Qwen | prompt_dataset |
+| Bloom-7B1 | pellm.bloom | Bloom | prompt_dataset |
+| LLaMA-2-7B | pellm.llama | LLaMa | prompt_dataset |
+| LLaMA-7B | pellm.llama | LLaMa | prompt_dataset |
+| ChatGLM3-6B | pellm.chatglm | ChatGLM | prompt_dataset |
+| GPT-2 | pellm.gpt2 | GPT2 | seq_cls_dataset |
+| ALBERT | pellm.albert | Albert | seq_cls_dataset |
+| BART | pellm.bart | Bart | seq_cls_dataset |
+| BERT | pellm.bert | Bert | seq_cls_dataset |
+| DeBERTa | pellm.deberta | Deberta | seq_cls_dataset |
+| DistilBERT | pellm.distilbert | DistilBert | seq_cls_dataset |
+| RoBERTa | pellm.roberta | Roberta | seq_cls_dataset |
```
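Following the table, switching the trained model amounts to pointing the job at a different module/class/dataset triple. A hypothetical sketch of what such a selection could look like (the dict layout and the `build_model_conf` helper are illustrative assumptions, not the actual FATE-LLM job-config schema; entries mirror the table's columns):

```python
# Illustrative only: keys mirror the table above, not the real FATE-LLM schema.
BUILTIN_PELLM_MODELS = {
    # model name:   (module_name,      class_name,  dataset)
    "Bloom-7B1":    ("pellm.bloom",    "Bloom",     "prompt_dataset"),
    "ChatGLM3-6B":  ("pellm.chatglm",  "ChatGLM",   "prompt_dataset"),
    "GPT-2":        ("pellm.gpt2",     "GPT2",      "seq_cls_dataset"),
    "RoBERTa":      ("pellm.roberta",  "Roberta",   "seq_cls_dataset"),
}

def build_model_conf(model_name):
    """Return a hypothetical job-config fragment for a builtin PELLM model."""
    module_name, class_name, dataset = BUILTIN_PELLM_MODELS[model_name]
    return {"module_name": module_name, "class_name": class_name, "dataset": dataset}

conf = build_model_conf("GPT-2")
# conf == {"module_name": "pellm.gpt2", "class_name": "GPT2", "dataset": "seq_cls_dataset"}
```

Switching from GPT-2 fine-tuning to, say, Bloom-7B1 then only changes the lookup key, which also swaps `seq_cls_dataset` for `prompt_dataset` as the table prescribes.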