RHOAIENG-10023: Adding procedure for Speculative Decoding and Multi-Modal Inferencing #406
Changes from all commits
409961f
325edbb
69aa5b8
cb2f82c
Might be worth mentioning that the authorization header is only required if --api-key is added to the vLLM command-line arguments (in the InferenceService or ServingRuntime). See vllm --help, https://docs.vllm.ai/en/v0.5.4/serving/openai_compatible_server.html and/or https://docs.vllm.ai/en/v0.5.4/serving/env_vars.html#environment-variables
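For illustration, a minimal sketch of the request a client would build in the case the reviewer describes, assuming the vLLM server was started with --api-key (or VLLM_API_KEY set); the key, endpoint, and model name here are placeholders, not values from the PR:

```python
import json

# Hypothetical key: the value passed to vLLM via --api-key.
API_KEY = "example-key"

# The Authorization header is only needed when vLLM was started with
# --api-key; without that flag the server accepts requests without it.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

# Minimal chat request body (hypothetical model name).
payload = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "Hello"}],
}

body = json.dumps(payload)
print(headers["Authorization"])  # Bearer example-key
```

Sending this body to the /v1/chat/completions endpoint with any HTTP client would exercise the API-key path; dropping the Authorization header is correct only when --api-key was not set.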
Not really. This authorization header is intended for the Authorino token, not for the vLLM API key. We don't provide documentation for the API key on our vLLM server, so I don't think it needs to be included here; I'd keep the doc as it is.
Could also add an example with the /v1/completions endpoint.
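As a sketch of what such an example might look like, a /v1/completions request body uses a flat prompt string rather than the chat API's messages list; the model name and prompt below are placeholders:

```python
import json

# Hypothetical /v1/completions request body: a flat "prompt" string
# instead of the chat API's "messages" list.
completion_payload = {
    "model": "my-model",          # hypothetical model name
    "prompt": "San Francisco is a",
    "max_tokens": 16,
    "temperature": 0,
}

print(json.dumps(completion_payload, indent=2))
```

The same headers as a chat request would apply; only the path and body shape differ.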
@dtrifiro AFAIK the :443/v1/completions endpoint does not handle image URLs or anything similar, so based on that I don't think we can add a completions example.
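This is the structural difference at issue: the chat API's message content can be a list of typed parts mixing text with an image_url entry, which the flat prompt field of /v1/completions cannot express. A sketch, with the model name and image URL as placeholders:

```python
import json

# Hypothetical multi-modal chat request: content is a list of typed
# parts (text plus image_url), which /v1/completions cannot carry.
chat_payload = {
    "model": "my-model",  # hypothetical model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.png"},
                },
            ],
        }
    ],
}

print(json.dumps(chat_payload)["__len__"] if False else "payload built")
```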
Well, it's there and part of the OpenAI API spec, although it's being deprecated: https://platform.openai.com/docs/guides/completions
Since it should be equivalent to the chat API (albeit a bit simpler), I guess we can leave it out.