Releases: FederatedAI/FATE-LLM
Release v2.2.0
By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.
Major Features and Improvements
- Integrate the PDSS algorithm, a novel framework that enhances local small language models (SLMs) using differential-privacy-protected Chain-of-Thought (CoT) rationales generated by remote LLMs:
  - Implement InferDPT for privacy-preserving CoT generation (a toy sketch of the underlying DP token perturbation follows this list).
  - Support an encoder-decoder mechanism for privacy-preserving CoT generation.
  - Add prefix trainers for step-by-step distillation and text encoder-decoder training.
- Integrate the FDKT algorithm, a framework that enables domain-specific knowledge transfer from LLMs to SLMs while preserving SLM data privacy.
- Deployment Optimization: support installation of FATE-LLM via PyPI.
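As a hedged illustration of the differential-privacy step that InferDPT-style CoT generation relies on, the toy Python sketch below replaces a token using exponential-mechanism-style sampling over embedding distances. The vocabulary, embeddings, distance measure, and epsilon are all made up for illustration; this is not FATE-LLM's or the paper's exact mechanism.

```python
# Toy illustration only: exponential-mechanism-style token replacement,
# the flavor of local DP perturbation that InferDPT-style pipelines build on.
# Vocabulary, embeddings, and epsilon are made-up values.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "great", "bad", "terrible", "okay"]
emb = rng.normal(size=(len(vocab), 8))  # stand-in token embeddings

def perturb_token(idx: int, epsilon: float = 2.0) -> str:
    """Sample a replacement token with probability proportional to
    exp(-epsilon * distance / 2), so nearby tokens are preferred while
    the original token is never revealed deterministically."""
    dist = np.linalg.norm(emb - emb[idx], axis=1)
    scores = np.exp(-epsilon * dist / 2)
    probs = scores / scores.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

# A prompt would be perturbed token by token before being sent to the remote LLM.
print([perturb_token(i) for i in range(len(vocab))])
```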
Release v2.1.0
Major Features and Improvements
- Introducing FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models (a toy sketch of the underlying distillation loss follows this list).
  - Support three distinct scenarios: Heterogeneous, Homogeneous, and One-to-One.
  - Support one-way knowledge transfer from LLMs to SLMs.
- Introducing InferDPT: Privacy-preserving Inference for Black-box Large Language Models. InferDPT leverages differential privacy (DP) to facilitate privacy-preserving inference for large language models.
- Introducing FATE-LLM Evaluate: evaluate FATE-LLM models in a few lines with the Python SDK or simple CLI commands; built-in cases included.
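The mutual-transfer idea behind FedMKT can be pictured as bidirectional knowledge distillation between server-side LLMs and client-side SLMs. The sketch below shows only a plain KL-based distillation loss applied in both directions with random stand-in logits; FedMKT's vocabulary alignment and federated aggregation steps are omitted, and nothing here is FATE-LLM API code.

```python
# Toy sketch of bidirectional (mutual) knowledge distillation.
# Random logits stand in for model outputs; this is not FATE-LLM API code.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Standard KL-divergence distillation loss from teacher to student."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

llm_logits = torch.randn(4, 32000)  # stand-in for server-side LLM outputs
slm_logits = torch.randn(4, 32000)  # stand-in for client-side SLM outputs

loss_slm = kd_loss(slm_logits, llm_logits.detach())  # LLM -> SLM transfer
loss_llm = kd_loss(llm_logits, slm_logits.detach())  # SLM -> LLM transfer
```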
Release v2.0.0
Major Features and Improvements
- Adapt to the FATE v2.0 framework:
  - Migrate parameter-efficient fine-tuning training methods and models.
  - Migrate Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+).
  - New trainer, dataset, and data_processing function design.
- New FedKSeed federated tuning algorithm: train large language models in a federated learning setting with extremely low communication cost (a conceptual sketch of the seed-based update follows below).
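FedKSeed's low communication cost comes from exchanging only (seed, scalar) pairs: each zeroth-order update direction is regenerated from a shared random seed, so no full gradient or model delta crosses the network. The toy sketch below illustrates that idea on a tiny quadratic problem; it is a conceptual illustration under assumed names and values, not FATE-LLM's implementation.

```python
# Conceptual illustration of the seed-and-scalar update idea behind FedKSeed.
# Not FATE-LLM's implementation; names and values are illustrative.
import torch

def zo_scalar_grad(loss_fn, params, seed, eps=1e-3):
    """Estimate a directional (scalar) gradient along a perturbation drawn
    from `seed`; only the pair (seed, scalar) needs to be communicated."""
    gen = torch.Generator().manual_seed(seed)
    z = torch.randn(params.numel(), generator=gen).view_as(params)
    return (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)

def apply_update(params, seed, scalar_grad, lr=1e-2):
    """Any party can regenerate the perturbation from the seed and apply the
    scalar gradient, so no gradient tensor is ever transmitted."""
    gen = torch.Generator().manual_seed(seed)
    z = torch.randn(params.numel(), generator=gen).view_as(params)
    return params - lr * scalar_grad * z

# Toy usage: a quadratic loss on a 4-dimensional parameter vector.
params = torch.zeros(4)
loss_fn = lambda p: ((p - torch.ones(4)) ** 2).sum()
seed = 7  # in FedKSeed, seeds come from a small candidate pool of size K
g = zo_scalar_grad(loss_fn, params, seed)
params = apply_update(params, seed, g)
```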
Release v1.3.0
Major Features and Improvements
- FTL-LLM (Federated Learning + Transfer Learning + LLM)
  - Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+) now supported
  - Framework available for Emulator and Adapter development
  - New Offsite-Tuning Trainer introduced
  - Includes built-in models such as the GPT-2 family, LLaMA-7B, and the Bloom family
- FedIPR
  - Introduced WatermarkDataset as the foundational dataset class for backdoor-based watermarks
  - Added SignConv and SignLayerNorm blocks for feature-based watermark models (a toy sketch of the sign-based watermark loss follows this list)
  - New FedIPR Trainer available
  - Built-in models with feature-based watermarks include AlexNet, ResNet-18, DistilBERT, and GPT-2
- More models support parameter-efficient fine-tuning: ChatGLM2-6B and Bloom-7B1
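Feature-based watermarks of the FedIPR kind embed an owner's bit string in the signs of normalization-layer scale parameters via a secret projection. The toy sketch below shows a hinge-style regularizer that pushes those projected signs toward the target bits; the shapes, margin, and use of a plain LayerNorm are assumptions for illustration, not FATE-LLM's SignLayerNorm implementation.

```python
# Toy sketch of a sign-based (feature) watermark regularizer in the spirit of
# FedIPR. Shapes, margin, and layer choice are illustrative assumptions.
import torch

def sign_watermark_loss(gamma, secret_matrix, bits, margin=0.1):
    """Hinge loss that is zero once sign(secret_matrix @ gamma) matches the
    owner's bit string (encoded as +/-1)."""
    projected = secret_matrix @ gamma   # (num_bits,)
    targets = 2.0 * bits - 1.0          # {0, 1} -> {-1, +1}
    return torch.relu(margin - targets * projected).mean()

# Toy usage: watermark the scale parameters of a LayerNorm during training.
ln = torch.nn.LayerNorm(16)
num_bits = 8
secret = torch.randn(num_bits, 16)               # kept private by the model owner
bits = torch.randint(0, 2, (num_bits,)).float()  # the owner's signature bits

loss = sign_watermark_loss(ln.weight, secret, bits)
loss.backward()  # in practice this term is added to the task loss
```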
Release v1.2.0
Major Features and Improvements
- Support Federated Training of LLaMA-7B with parameter-efficient fine-tuning.
Release v1.1.0
Major Features and Improvements
- Support Federated Training of ChatGLM-6B with parameter-efficient fine-tuning adapters such as LoRA and P-Tuning v2.
- Integration of `peft`, which supports many parameter-efficient adapters (a minimal LoRA sketch follows below).
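Since the adapters above come from the Hugging Face `peft` library, attaching one to a base model locally looks roughly like the sketch below. The base model name, target modules, and hyperparameters are illustrative, and the federated orchestration that FATE-LLM adds on top is not shown.

```python
# Minimal local sketch: wrap a causal LM with a LoRA adapter via peft.
# Model name, target modules, and hyperparameters are illustrative;
# FATE-LLM's federated training wiring is not shown.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # module names depend on the architecture
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```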