Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Accessible large language models via k-bit quantization for PyTorch (see the QLoRA sketch after this list).
Firefly: a training toolkit for large language models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models.
Fine-tuning ChatGLM-6B efficiently with PEFT.
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
🐋 MindChat (漫谈): a mental-health large language model for talking through life's journey and facing hardship with a smile.
🌿 Sunsimiao (孙思邈): a safe, reliable, and accessible Chinese medical large language model.
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning".
LongQLoRA: Extend Context Length of LLMs Efficiently (see the RoPE-scaling sketch after this list).
Small finetuned LLMs for a diverse set of useful tasks
Use QLoRA to tune LLMs in PyTorch Lightning with Hugging Face + MLflow (see the QLoRA sketch after this list).
Tuning the Finetuning: An exploration of achieving success with QLoRA
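
Most of the repositories above share the same core recipe, QLoRA: load the base model with its weights quantized to 4-bit NF4 via bitsandbytes, then attach small trainable LoRA adapters with PEFT so that only a fraction of a percent of the parameters are updated. Below is a minimal sketch of that setup using the Hugging Face transformers and peft APIs; the base model name and the LoRA hyperparameters (r, alpha, target modules) are illustrative assumptions, not values taken from any listed repository.

```python
# Minimal QLoRA-style setup: 4-bit NF4 quantization (bitsandbytes) + LoRA adapters (peft).
# The model name and LoRA hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical choice of base model

# k-bit quantization config: weights stored in 4-bit NF4, compute in bfloat16,
# with double quantization to trim a little more memory from the quantization constants.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters: only these low-rank matrices are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common default
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The resulting model drops into any ordinary training loop, for example a transformers Trainer or a PyTorch LightningModule, and only the adapter weights need to be checkpointed.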
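LongQLoRA extends context length by combining this QLoRA setup with RoPE position interpolation. The sketch below shows only the generic rope_scaling option that transformers exposes for Llama-style models, not LongQLoRA's own implementation; the model name and scaling factor are assumptions.

```python
# Generic RoPE linear position interpolation in transformers (not LongQLoRA's own code).
# Assumption: a Llama-style base model with a 4096-token pretraining context.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                      # hypothetical base model
    rope_scaling={"type": "linear", "factor": 2.0},  # interpolate positions: 4096 -> 8192
)
```

To adapt the model to the interpolated positions, it would then be fine-tuned on long sequences, in LongQLoRA's case with the 4-bit-plus-LoRA setup sketched above.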