
add vllm backend #274

Merged · 14 commits merged into main · Sep 3, 2024
Conversation

NathanHB (Member) commented on Aug 21, 2024

what this PR does:

  • adds vllm as a backend for faster inference
  • supports generate until, loglikelihood, and multiple generation metrics

how to use:

 lighteval accelerate --model_args="pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,dtype=bfloat16,vllm,data_parallel_size=2" --use_chat_template --tasks "leaderboard|arc:challenge|0|0,extended|ifeval|0|0,lighteval|gsm8k|5|1" --output_dir="./evals/"
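For context, here is a minimal sketch of the kind of calls such a backend makes, written against vLLM's public Python API directly; the model name, prompts, and sampling settings are illustrative and are not taken from this PR's implementation:

```python
from vllm import LLM, SamplingParams

# Load the model with vLLM (dtype mirrors the dtype=bfloat16 model arg above).
llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct", dtype="bfloat16")

# "generate until": greedy generation that stops at the given stop sequences.
gen_params = SamplingParams(temperature=0.0, max_tokens=256, stop=["\n\n"])
outputs = llm.generate(["Question: 2 + 2 = ?\nAnswer:"], gen_params)
print(outputs[0].outputs[0].text)

# "loglikelihood": request per-token logprobs over the prompt itself, then
# sum them over the continuation tokens to score a fixed candidate answer.
ll_params = SamplingParams(max_tokens=1, prompt_logprobs=1)
scored = llm.generate(["Question: 2 + 2 = ?\nAnswer: 4"], ll_params)
print(scored[0].prompt_logprobs)  # per-token logprob dicts; None for the first token
```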

Review threads were opened on:

  • src/lighteval/data.py
  • src/lighteval/models/model_config.py
  • src/lighteval/models/model_loader.py
  • src/lighteval/models/vllm_model.py
NathanHB merged commit 21934d5 into main on Sep 3, 2024
2 checks passed