🎯 Focusing
PhD student at UCSD CSE
Pinned
- hao-ai-lab/LookaheadDecoding: [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
- hao-ai-lab/vllm-ltr: [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank
- hao-ai-lab/Dynasor: A simple extension on vLLM to help you speed up reasoning models without training