Rename to vLLM #150
Merged
Conversation
zhuohan123 approved these changes on Jun 17, 2023
LGTM! vLLM rocks!
@zhuohan123 FYI, I've just renamed the attention classes:
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request on Feb 13, 2024
sjchoi1 pushed a commit to casys-kaist-internal/vllm that referenced this pull request on May 7, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request on Jul 3, 2024
Summary: The 2024-03-25 nightly benchmarks failed due to performance regressions. We find that this is due to either:
- the inherent flakiness in the benchmark experiment itself (experiments with small workloads), or
- the inherent flakiness in the metrics.

Please look at https://docs.google.com/document/d/1478BMToQIcpSCloiEWqmHoZVrVOZVV-1u4gCyqtjkKE/edit?usp=sharing for more details.

Updates in this PR:
- Serving case: remove the 3000-num-prompts at 10 QPS experiments.
- Serving case: mark the p90 and p99 statistics as "Observation" metrics so they don't trigger failure.
- Engine case (benchmark_throughput.py): remove the 16 and 32 prefill cases.

Test: some local testing

Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
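The "Observation" distinction above could be sketched as follows. This is a hypothetical illustration, not the actual benchmark harness: the metric names, the two sets, and the `check_run` helper are all assumptions made for the example. The idea is that flaky tail-latency metrics (p90, p99) are still recorded, but only the stable metrics are compared against failure thresholds.

```python
# Hypothetical sketch: metrics tagged as "observation" are reported but can
# never fail the nightly run; only "failure" metrics are threshold-checked.
FAILURE_METRICS = {"mean_latency", "p50_latency"}
OBSERVATION_METRICS = {"p90_latency", "p99_latency"}  # flaky: report only


def check_run(results: dict, thresholds: dict) -> list:
    """Return the names of failing metrics; observation metrics never fail."""
    failures = []
    for name, value in results.items():
        if name in OBSERVATION_METRICS:
            continue  # still logged upstream, but cannot trigger a failure
        if name in FAILURE_METRICS and value > thresholds.get(name, float("inf")):
            failures.append(name)
    return failures


# A wildly regressed p99 is ignored; only the mean-latency regression fails.
print(check_run({"p99_latency": 999.0, "mean_latency": 12.0},
                {"mean_latency": 10.0}))  # -> ['mean_latency']
```

Under this split, a noisy p99 spike on a small-workload experiment no longer flags the whole nightly run as a regression, which is the behavior the PR describes.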
The current plan is vllm