Merge pull request #32 from FederatedAI/develop-1.3
Update LLM-1.3
dylan-fan authored Sep 8, 2023
2 parents 5197a6a + 33d86ea commit 3986b7e
Showing 35 changed files with 4,317 additions and 52 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -17,12 +17,14 @@ FATE-LLM is a framework to support federated learning for large language models(

### Standalone deployment
Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
Deploy FATE-Standalone version with 1.11.2 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
Deploy FATE-Standalone version with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
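As an illustration, the copy step could look like the minimal sketch below, assuming FATE-Standalone is already deployed; the install path is a hypothetical placeholder:

```python
# Sketch: copy the FATE-LLM package into an existing FATE-Standalone install.
# Assumes FATE-Standalone 1.11.3 <= version < 2.0 is already deployed;
# adjust fate_install to your actual deployment path.
import shutil

fate_install = "/data/projects/fate"           # hypothetical install path
src = "python/fate_llm"                        # from the FATE-LLM repository root
dst = f"{fate_install}/fate/python/fate_llm"   # target inside the FATE python tree

shutil.copytree(src, dst)
```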

### Cluster deployment
Use [FATE-LLM deployment packages](https://github.com/FederatedAI/FATE/wiki/Download#llm%E9%83%A8%E7%BD%B2%E5%8C%85) to deploy; refer to [FATE-Cluster deployment](https://github.com/FederatedAI/FATE#cluster-deployment) for more deployment details.

## Quick Start
- [Offsite-tuning Tutorial: Model Definition and Job Submission](./doc/tutorial/offsite_tuning/Offsite_tuning_tutorial.ipynb)
- [FedIPR Tutorial: Add Watermarks to Your Model](./doc/tutorial/fed_ipr/FedIPR-tutorial.ipynb)
- [Federated ChatGLM-6B Training](./doc/tutorial/ChatGLM-6B_ds.ipynb)
- [GPT-2 Training](./doc/tutorial/GPT2-example.ipynb)
- [Builtin Models](./doc/tutorial/builtin_models.md)
- [Builtin Models In PELLM](./doc/tutorial/builtin_models.md)
15 changes: 15 additions & 0 deletions RELEASE.md
@@ -1,3 +1,18 @@
## Release 1.3.0
### Major Features and Improvements
* FTL-LLM (Federated Learning + Transfer Learning + LLM)
  * Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+) now supported
  * Framework available for Emulator and Adapter development
  * New Offsite-Tuning Trainer introduced
  * Includes built-in models such as the GPT-2 family, LLaMA-7B, and the Bloom family
* FedIPR
  * Introduced WatermarkDataset as the foundational dataset class for backdoor-based watermarks
  * Added SignConv and SignLayerNorm blocks for feature-based watermark models
  * New FedIPR Trainer available
  * Built-in models with feature-based watermarks include Alexnet, Resnet18, DistilBert, and GPT2
* More models support parameter-efficient fine-tuning: ChatGLM2-6B and Bloom-7B1


## Release 1.2.0
### Major Features and Improvements
* Support Federated Training of LLaMA-7B with parameter-efficient fine-tuning.
5 changes: 4 additions & 1 deletion doc/tutorial/builtin_models.md
@@ -7,7 +7,10 @@ After reading the training tutorial above, it's easy to use other models listing

| Model | ModuleName | ClassName | DataSetName |
| -------------- | ----------------- | --------------------------------- | ---------------- |
| LLaMA-7B | pellm.llama | LLAMAForCausalLM | llama_tokenizer |
| Bloom-7B1 | pellm.bloom | BloomForCausalLM | prompt_tokenizer |
| LLaMA-2-7B | pellm.llama | LLAMAForCausalLM | prompt_tokenizer |
| LLaMA-7B | pellm.llama | LLAMAForCausalLM | prompt_tokenizer |
| ChatGLM2-6B | pellm.chatglm | ChatGLMForConditionalGeneration | glm_tokenizer |
| ChatGLM-6B | pellm.chatglm | ChatGLMForConditionalGeneration | glm_tokenizer |
| GPT-2 | pellm.gpt2 | GPT2 | nlp_tokenizer |
| ALBERT | pellm.albert | Albert | nlp_tokenizer |
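To train with one of the models in this table, the ModuleName/ClassName pair identifies the PELLM model and the DataSetName selects the matching tokenizer dataset. The snippet below is a rough sketch following the pattern of the ChatGLM/GPT-2 tutorials; the model path, `tokenizer_name_or_path`, `pretrained_path`, and other arguments are assumptions and may differ by model and FATE version:

```python
# Sketch: wiring a table entry (ChatGLM2-6B here) into a FATE-LLM pipeline job.
# Assumes a FATE 1.11.x pipeline environment; exact arguments may vary.
import torch as t
from pipeline import fate_torch_hook
from pipeline.component.nn import DatasetParam

fate_torch_hook(t)  # enables t.nn.CustModel inside pipeline model definitions

# DataSetName column -> dataset/tokenizer used to preprocess the input data
dataset_param = DatasetParam(
    dataset_name="glm_tokenizer",                   # from the table above
    tokenizer_name_or_path="/path/to/chatglm2-6b",  # hypothetical local model path
)

# ModuleName / ClassName columns -> the PELLM model instantiated on each party
model = t.nn.Sequential(
    t.nn.CustModel(
        module_name="pellm.chatglm",
        class_name="ChatGLMForConditionalGeneration",
        pretrained_path="/path/to/chatglm2-6b",     # hypothetical local model path
    )
)
```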
