
[Core] Add Additional Metrics to vLLM Server #12627

Open

sahelib25 wants to merge 1 commit into main
Conversation


@sahelib25 commented Jan 31, 2025

This PR adds the following metrics to the vLLM server:

| Metric Name | Type | Unit |
| --- | --- | --- |
| model_load_time | Gauge | Seconds |
| max_token_capacity | Gauge | Tokens |
| time_per_prefill_token | Histogram | Milliseconds |
| total_tokens_in_current_batch | Gauge | Tokens |
| total_tokens_in_queue (prefill + decode) | Gauge | Tokens |
| request_with_evicted_tokens | Counter | Count |
| total_evicted_tokens | Counter | Tokens |

FIX #5041
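
For illustration, here is a minimal sketch of how metrics with these names and types could be declared with the standalone prometheus_client library. The PR itself wires the metrics through vLLM's own wrappers in metrics.py, so the exact metric names, labels, and histogram buckets below are assumptions based on the table above, not the PR's actual code.

```python
# Illustrative sketch only: names, labels, and buckets are assumptions,
# not the registrations added in vLLM's metrics.py by this PR.
from prometheus_client import Counter, Gauge, Histogram

model_load_time = Gauge(
    "vllm_model_load_time_seconds",
    "Wall-clock time spent loading the model, in seconds.")

max_token_capacity = Gauge(
    "vllm_max_token_capacity_tokens",
    "Maximum number of tokens the engine's KV cache can hold.")

time_per_prefill_token = Histogram(
    "vllm_time_per_prefill_token_milliseconds",
    "Prefill time per token, in milliseconds.",
    buckets=(0.5, 1, 2, 5, 10, 25, 50, 100))

total_tokens_in_current_batch = Gauge(
    "vllm_total_tokens_in_current_batch",
    "Number of tokens scheduled in the current batch.")

total_tokens_in_queue = Gauge(
    "vllm_total_tokens_in_queue",
    "Number of tokens (prefill + decode) waiting in the queue.")

request_with_evicted_tokens = Counter(
    "vllm_request_with_evicted_tokens",
    "Number of requests that had at least one token evicted.")

total_evicted_tokens = Counter(
    "vllm_total_evicted_tokens",
    "Total number of tokens evicted.")
```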

* Add metrics model_load_time and max_token_capacity (a load-time timing sketch follows this list)

* Add time_per_prefill_token

* Add total_tokens_in_current_batch

* Add total_tokens_in_queue (prefill + decode)

* Add request_with_evicted_tokens

* Add total_evicted_tokens and fix for request_with_evicted_tokens.

* Fix max_token_capacity metric

* Fix code to have consistent naming of variables

* Update metrics.py

* Fix model_load_time metric and update scripts.

* Update Scripts.

* Revert changes.

* Fix formatting

* Fix model_loader.py script

* Add tests.

* Fix pre-commit errors.

* Make ruff happy.

* Fix to track evictions in GPU mode.

* Fix to track evictions in GPU mode.

* Fix to track evictions in GPU mode.

* fix merge conflicts.

* fix merge conflicts.

* fix merge conflicts.

* fix merge conflicts.

* Fix formatting
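
As a reference for the model_load_time metric mentioned in the commits above, timing the load generally amounts to wrapping the weight-loading step in a monotonic timer and setting a gauge once at startup. A minimal sketch, assuming a standalone prometheus_client gauge; `load_fn` is a hypothetical stand-in for whatever actually loads the weights, not a vLLM API.

```python
import time

from prometheus_client import Gauge

# Hypothetical gauge for illustration; the PR registers its metric through
# vLLM's own metrics plumbing rather than declaring it like this.
model_load_time = Gauge(
    "vllm_model_load_time_seconds",
    "Wall-clock time spent loading the model, in seconds.")


def load_model_with_timing(load_fn, *args, **kwargs):
    """Call an arbitrary model-loading function and record how long it took."""
    start = time.monotonic()
    model = load_fn(*args, **kwargs)
    model_load_time.set(time.monotonic() - start)
    return model
```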

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@robertgshaw2-redhat (Collaborator)

Are these really necessary? I am cautious about the maintenance burden of an endless increase in metrics.

@miladm (Collaborator) commented Jan 31, 2025

cc @ManfeiBai @lsy323 for visibility. It would be great to capture these metrics in TPU nightly regressions if/when they land.

@achandrasekar

@robertgshaw2-redhat This is the leftover list from the initial ones we requested in #5041. The ones around model load time, token capacity, and tokens in batch + queue are being used for startup latency improvements and autoscaling recommendations, and would be valuable.

Let me know if there are specific metrics that don't make sense to add or are detrimental to performance.

@bvrockwell (Contributor)

Thanks @achandrasekar, we should know if there are any that are detrimental to performance.

Could you please help verify this on your side?

@simon-mo (Collaborator) commented Feb 2, 2025

The load time sounds good to me. The rest might be fine if they are easy to derive from current metrics, but we do need to be a bit careful about performance regression/complexity.

Development

Successfully merging this pull request may close these issues.

[Feature]: Additional metrics to enable better autoscaling / load balancing of vLLM servers in Kubernetes