[Core] Add Additional Metrics to vLLM Server #12627
base: main
Conversation
* Add metrics model_load_time and max_token_capacity
* Add time_per_prefill_token
* Add total_tokens_in_current_batch
* Add total_tokens_in_queue (prefill + decode)
* Add request_with_evicted_tokens
* Add total_evicted_tokens and fix for request_with_evicted_tokens.
* Fix max_token_capacity metric
* Fix code to have consistent naming of variables
* Update metrics.py
* Fix model_load_time metric and update scripts.
* Update Scripts.
* Revert changes.
* Fix formatting
* Fix model_loader.py script
* Add tests.
* Fix pre-commit errors.
* Make ruff happy.
* Fix to track evictions in GPU mode.
* Fix to track evictions in GPU mode.
* Fix to track evictions in GPU mode.
* fix merge conflicts.
* fix merge conflicts.
* fix merge conflicts.
* fix merge conflicts.
* Fix formatting
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Are these really necessary? I am cautious about the maintenance burden of an endless increase in metrics.
cc @ManfeiBai @lsy323 for viz. Would be great to capture these metrics in TPU nightly regressions if/when they land.
@robertgshaw2-redhat This is the leftover list from the initial ones we requested in #5041. The ones around model load time, token capacity, and tokens in batch + queue are being used for startup-latency improvements and autoscaling recommendations, so they would be valuable. Let me know if there are specific metrics that don't make sense to add or that would be detrimental to performance.
Thanks @achandrasekar. We should know if there are any that are detrimental to performance. Could you please help verify this on your side?
The load time sounds good to me. The rest might be fine if they are easy to derive from current metrics, but we do need to be a bit careful about performance regressions and added complexity.
This PR adds the following metrics to the vLLM server:

* model_load_time
* max_token_capacity
* time_per_prefill_token
* total_tokens_in_current_batch
* total_tokens_in_queue (prefill + decode)
* request_with_evicted_tokens
* total_evicted_tokens
FIX #5041
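
Below is a minimal sketch of how metrics like these could be registered with prometheus_client, which vLLM uses to serve its /metrics endpoint. This is illustrative only, not the PR's actual diff: the metric names, label set, bucket boundaries, and the `record_model_load` helper are assumptions.

```python
# Illustrative sketch only -- not the PR's actual code. Metric names,
# labels, and measurement points are assumptions for discussion.
import time
from typing import Callable

from prometheus_client import Counter, Gauge, Histogram

labelnames = ["model_name"]

# One-shot startup metrics, set once after engine initialization.
gauge_model_load_time = Gauge(
    "vllm:model_load_time_seconds",
    "Wall-clock seconds spent loading model weights.",
    labelnames)
gauge_max_token_capacity = Gauge(
    "vllm:max_token_capacity",
    "Maximum number of tokens the KV cache can hold.",
    labelnames)

# Per-step gauges, refreshed from scheduler stats on each engine iteration.
gauge_tokens_in_current_batch = Gauge(
    "vllm:total_tokens_in_current_batch",
    "Tokens scheduled in the current batch.",
    labelnames)
gauge_tokens_in_queue = Gauge(
    "vllm:total_tokens_in_queue",
    "Tokens waiting to be processed (prefill + decode).",
    labelnames)

# Monotonic counters, incremented only when an eviction occurs.
counter_requests_with_evicted_tokens = Counter(
    "vllm:request_with_evicted_tokens",
    "Number of requests that had tokens evicted from the KV cache.",
    labelnames)
counter_total_evicted_tokens = Counter(
    "vllm:total_evicted_tokens",
    "Total number of tokens evicted from the KV cache.",
    labelnames)

# Distribution of prefill speed, observed once per finished prefill.
histogram_time_per_prefill_token = Histogram(
    "vllm:time_per_prefill_token_seconds",
    "Prefill latency divided by the number of prompt tokens.",
    labelnames,
    buckets=[0.0001, 0.00025, 0.0005, 0.001, 0.0025, 0.005, 0.01])


def record_model_load(model_name: str, load_fn: Callable[[], None]) -> None:
    """Time a model-load callable and publish the result once at startup."""
    start = time.perf_counter()
    load_fn()
    gauge_model_load_time.labels(model_name=model_name).set(
        time.perf_counter() - start)
```

On the performance concern raised above: in a design like this, the gauges are cheap because they can be set from scheduler statistics that are already computed on every engine step, and the counters only increment on eviction events, so the steady-state overhead is a handful of atomic updates per iteration.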