Working on LLMs, with a focus on Parameter-Efficient Fine-Tuning (PEFT) and Mixture-of-Experts (MoE).
- HKUST, Hong Kong (UTC +08:00)
Pinned
- juyongjiang/CodeLLMSurvey
  The official GitHub page for the survey paper "A Survey on Large Language Models for Code Generation".
- withinmiaov/A-Survey-on-Mixture-of-Experts-in-LLMs
  The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models".
- peft
  Forked from huggingface/peft
  🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (See the minimal usage sketch after this list.)
  Python
- LLM-Adapters
  Forked from AGI-Edgerunners/LLM-Adapters
  Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models".
  Python
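
Since PEFT is the main research focus above, here is a minimal usage sketch of LoRA adapter injection with the 🤗 PEFT library. The base model ("gpt2"), target modules, and hyperparameters are illustrative assumptions for the sketch, not settings taken from these repositories.

```python
# Minimal LoRA setup sketch with 🤗 PEFT.
# Model name, target modules, and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the LoRA update
    lora_dropout=0.05,          # dropout applied to the LoRA branch
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model; only the injected adapter weights are trainable.
model = get_peft_model(base, config)
model.print_trainable_parameters()
```

The wrapped model can then be trained with a standard training loop: the base weights stay frozen and only the small LoRA matrices receive gradients, which is what makes the fine-tuning parameter-efficient.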