Merge pull request #118 from FederatedAI/dev-2.2.0
Merge v2.2.0's  Updates
mgqa34 authored Aug 2, 2024
2 parents c9b96e6 + e5eb458 commit c0ae102
Showing 76 changed files with 5,897 additions and 86 deletions.
13 changes: 8 additions & 5 deletions README.md
@@ -13,12 +13,13 @@ FATE-LLM is a framework to support federated learning for large language models(
<img src="./doc/images/fate-llm-plan.png">
</div>

## Deployment

### Standalone deployment
Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
* To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then make a new directory `{fate_install}/fate_llm` and clone the code into it, install the python requirements, and add `{fate_install}/fate_llm/python` to `PYTHONPATH`
* To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
* To deploy FATE-LLM v2.2.0 or a higher version, two ways are provided; please refer to the [deploy tutorial](./doc/standalone_deploy.md) for more details:
  * deploy with FATE only from PyPI, then use Launcher to run tasks
  * deploy with FATE, FATE-Flow, and FATE-Client from PyPI; users can then run tasks with Pipeline
* To deploy lower versions, please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment):
  * To deploy FATE-LLM v2.0.* - v2.1.*, deploy FATE-Standalone with version >= 2.1, then make a new directory `{fate_install}/fate_llm`, clone the code into it, install the Python requirements, and add `{fate_install}/fate_llm/python` to `PYTHONPATH` (see the shell sketch after this list)
  * To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy the directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
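
A minimal shell sketch of the v2.0.* - v2.1.* steps above, assuming FATE-Standalone >= 2.1 is already deployed under `{fate_install}` and that the Python requirements file lives at `python/requirements.txt` in this repository (both assumptions; adjust to your layout):

```shell
# Replace {fate_install} with your FATE-Standalone (>= 2.1) install path
cd {fate_install}
git clone https://github.com/FederatedAI/FATE-LLM.git fate_llm

# Install the Python requirements (file location is an assumption; check the repository layout)
pip install -r fate_llm/python/requirements.txt

# Make the fate_llm package importable
export PYTHONPATH={fate_install}/fate_llm/python:$PYTHONPATH
```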

### Cluster deployment
Use [FATE-LLM deployment packages](https://github.com/FederatedAI/FATE/wiki/Download#llm%E9%83%A8%E7%BD%B2%E5%8C%85) to deploy; refer to [FATE-Cluster deployment](https://github.com/FederatedAI/FATE#cluster-deployment) for more deployment details.
@@ -33,6 +34,8 @@ with Communication Cost under 18 Kilobytes](./doc/tutorial/fedkseed/)
- [InferDPT: Privacy-preserving Inference for Black-box Large Language Models](./doc/tutorial/inferdpt/inferdpt_tutorial.ipynb)
- [FedMKT: Federated Mutual Knowledge Transfer for Large and Small
Language Models](./doc/tutorial/fedmkt/)
- [PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models](./doc/tutorial/pdss)
- [FDKT: Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data](./doc/tutorial/fdkt)

## FATE-LLM Evaluate

10 changes: 10 additions & 0 deletions RELEASE.md
@@ -1,3 +1,13 @@
## Release 2.2.0
### Major Features and Improvements
* Integrate the PDSS algorithm, a novel framework that enhances local small language models (SLMs) using differentially private Chain-of-Thought (CoT) outputs generated by remote LLMs:
  * Implement InferDPT for privacy-preserving CoT generation.
  * Support an encoder-decoder mechanism for privacy-preserving CoT generation.
  * Add prefix trainers for step-by-step distillation and text encoder-decoder training.
* Integrate the FDKT algorithm, a framework that enables domain-specific knowledge transfer from LLMs to SLMs while preserving SLM data privacy.
* Deployment optimization: support installation of FATE-LLM from PyPI.


## Release 2.1.0
### Major Features and Improvements
* New FedMKT Federated Tuning Algorithms: Federated Mutual Knowledge Transfer for Large and Small Language Models
78 changes: 78 additions & 0 deletions doc/standalone_deploy.md
@@ -0,0 +1,78 @@
# FATE-LLM Single-Node Deployment Guide

## 1. Introduction

**Server Configuration:**

- **Quantity:** 1
- **Configuration:** 8 cores / 16GB memory / 500GB hard disk / GPU Machine
- **Operating System:** CentOS Linux release 7
- **User:** app (owner: apps)

The single-node version provides two deployment methods, which can be selected based on your needs:
- Install FATE-LLM from PyPI with FATE
- Install FATE-LLM from PyPI with FATE, FATE-Flow, FATE-Client

## 2. Install FATE-LLM from PyPI with FATE
With this method, users can run tasks with Launcher, which is convenient for quick experiments.

### 2.1 Installing Python Environment
- Prepare and install a [conda](https://docs.conda.io/projects/miniconda/en/latest/) environment.
- Create a virtual environment:

```shell
# FATE-LLM requires Python >= 3.10
conda create -n fate_env python=3.10
conda activate fate_env
```

### 2.2 Installing FATE-LLM
This section describes how to install FATE-LLM from PyPI together with FATE. Execute the following command to install FATE-LLM:

```shell
pip install fate_llm[fate]==2.2.0
```
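
After the command finishes, a quick sanity check can confirm that the package is importable; the module name `fate_llm` is assumed from the package layout described above:

```shell
# Show the installed package metadata and confirm the module imports
pip show fate_llm
python -c "import fate_llm"
```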

### 2.3 Usage
After installing successfully, please refer to the [tutorials](../README.md#quick-start) to run tasks; tasks described in the tutorials that run with Launcher are all supported.


## 3. Install FATE-LLM from PyPI with FATE, FATE-Flow, FATE-Client
With this method, users can run tasks with either Pipeline or Launcher.

### 3.1 Installing Python Environment
Please refer to Section 2.1.

### 3.2 Installing FATE-LLM with FATE, FATE-Flow, FATE-Client

```shell
pip install fate_llm[fate,fate_flow,fate_client]==2.2.0
```
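
Optionally, verify that the command-line tools used in the following steps are available on your `PATH`; this assumes the installation above also provides the `fate_flow` and `pipeline` CLIs:

```shell
# Both CLIs are used in the service initialization step below
fate_flow --help
pipeline --help
```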

### 3.3 Service Initialization

```shell
mkdir fate_workspace
fate_flow init --ip 127.0.0.1 --port 9380 --home $(pwd)/fate_workspace
pipeline init --ip 127.0.0.1 --port 9380
```
- `ip`: The IP address where the service runs.
- `port`: The HTTP port the service runs on.
- `home`: The data storage directory, including data, models, logs, job configurations, and SQLite databases.

### 3.4 Starting the FATE-Flow Service

```shell
fate_flow start
fate_flow status # make sure fate_flow service is started
```

FATE-Flow also provides other commands, such as stop and restart; use them only if you need to stop or restart the fate_flow service.
```shell
# Warning: the normal installation process does not require the stop/restart commands.
fate_flow stop
fate_flow restart
```

### 3.5 Usage
Please refer to the [tutorials](../README.md#quick-start) for more usage guides; tasks described in the tutorials that run with Pipeline or Launcher are all supported.
14 changes: 14 additions & 0 deletions doc/tutorial/fdkt/README.md
@@ -0,0 +1,14 @@
# FATE-LLM: FDKT
The algorithm is based on the paper [Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data](https://arxiv.org/pdf/2405.14212),
a novel framework that enables domain-specific knowledge transfer from LLMs to SLMs while preserving SLM data privacy.

## Citation
If you publish work that uses FDKT, please cite FDKT as follows:
```
@article{li2024federated,
title={Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data},
author={Li, Haoran and Zhao, Xinyuan and Guo, Dadi and Gu, Hanlin and Zeng, Ziqian and Han, Yuxing and Song, Yangqiu and Fan, Lixin and Yang, Qiang},
journal={arXiv preprint arXiv:2405.14212},
year={2024}
}
```