diff --git a/docs/source/getting_started/installation.rst b/docs/source/getting_started/installation.rst
index 0ac3e9a49870..b5a245b327d0 100644
--- a/docs/source/getting_started/installation.rst
+++ b/docs/source/getting_started/installation.rst
@@ -1,3 +1,5 @@
+.. _installation:
+
 Installation
 ============
 
diff --git a/docs/source/getting_started/quickstart.rst b/docs/source/getting_started/quickstart.rst
index 350036e27ed1..7e3064ac3314 100644
--- a/docs/source/getting_started/quickstart.rst
+++ b/docs/source/getting_started/quickstart.rst
@@ -1,30 +1,49 @@
+.. _quickstart:
+
 Quickstart
 ==========
 
-LLM
----
+This guide shows how to use vLLM to:
+
+* run offline batched inference on a dataset;
+* build an API server for a large language model;
+* start an OpenAI-compatible API server.
+
+Be sure to complete the :ref:`installation instructions <installation>` before continuing with this guide.
 
-Placeholder.
+Offline Batched Inference
+-------------------------
+
+We first show an example of using vLLM for offline batched inference on a dataset. In other words, we use vLLM to generate texts for a list of input prompts.
+
+Import ``LLM`` and ``SamplingParams`` from vLLM. The ``LLM`` class is the main class for running offline inference with the vLLM engine. The ``SamplingParams`` class specifies the parameters for the sampling process.
 
 .. code-block:: python
 
     from vllm import LLM, SamplingParams
 
-    # Sample prompts.
+Define the list of input prompts and the sampling parameters for generation. The sampling temperature is set to 0.8 and the nucleus sampling probability is set to 0.95. For more information about the sampling parameters, refer to the `class definition `_.
+
+.. code-block:: python
+
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
         "The capital of France is",
         "The future of AI is",
     ]
-    # Create a sampling params object.
     sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
 
-    # Create an LLM.
+Initialize vLLM's engine for offline inference with the ``LLM`` class and the `OPT-125M model `_. The list of supported models can be found at :ref:`supported models `.
+
+.. code-block:: python
+
     llm = LLM(model="facebook/opt-125m")
 
-    # Generate texts from the prompts. The output is a list of RequestOutput objects
-    # that contain the prompt, generated text, and other information.
+Call ``llm.generate`` to generate the outputs. It adds the input prompts to the vLLM engine's waiting queue and executes the vLLM engine to generate the outputs with high throughput. The outputs are returned as a list of ``RequestOutput`` objects, which include all the output tokens.
+
+.. code-block:: python
+
     outputs = llm.generate(prompts, sampling_params)
 
     # Print the outputs.
@@ -32,3 +51,81 @@ Placeholder.
     for output in outputs:
         prompt = output.prompt
         generated_text = output.outputs[0].text
         print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+
+
+The code example can also be found in `examples/offline_inference.py `_.
+
+
+API Server
+----------
+
+vLLM can be deployed as an LLM service. We provide an example `FastAPI `_ server. Check `vllm/entrypoints/api_server.py `_ for the server implementation. The server uses the ``AsyncLLMEngine`` class to support asynchronous processing of incoming requests.
+
+Start the server:
+
+.. code-block:: console
+
+    $ python -m vllm.entrypoints.api_server
+
+By default, this command starts the server at ``http://localhost:8000`` with the OPT-125M model.
+
+Query the model in a shell:
+
+.. code-block:: console
+
+    $ curl http://localhost:8000/generate \
+    $     -d '{
+    $         "prompt": "San Francisco is a",
+    $         "use_beam_search": true,
+    $         "n": 4,
+    $         "temperature": 0
+    $     }'
+
+See `examples/api_client.py `_ for a more detailed client example.
+
+OpenAI-Compatible Server
+------------------------
+
+vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API.
+
+Start the server:
+
+.. code-block:: console
+
+    $ python -m vllm.entrypoints.openai.api_server \
+    $     --model facebook/opt-125m
+
+By default, it starts the server at ``http://localhost:8000``. You can specify the address with the ``--host`` and ``--port`` arguments. The server currently hosts one model at a time (OPT-125M in the above command) and implements the `list models `_ and `create completion `_ endpoints. We are actively adding support for more endpoints.
+
+This server can be queried in the same format as the OpenAI API. For example, list the models:
+
+.. code-block:: console
+
+    $ curl http://localhost:8000/v1/models
+
+Query the model with input prompts:
+
+.. code-block:: console
+
+    $ curl http://localhost:8000/v1/completions \
+    $     -H "Content-Type: application/json" \
+    $     -d '{
+    $         "model": "facebook/opt-125m",
+    $         "prompt": "San Francisco is a",
+    $         "max_tokens": 7,
+    $         "temperature": 0
+    $     }'
+
+Since this server is compatible with the OpenAI API, you can use it as a drop-in replacement for any application using the OpenAI API. For example, another way to query the server is via the ``openai`` Python package:
+
+.. code-block:: python
+
+    import openai
+    # Modify OpenAI's API key and API base to use vLLM's API server.
+    openai.api_key = "EMPTY"
+    openai.api_base = "http://localhost:8000/v1"
+    completion = openai.Completion.create(model="facebook/opt-125m",
+                                          prompt="San Francisco is a")
+    print("Completion result:", completion)
+
+For a more detailed client example, refer to `examples/openai_client.py `_.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 51ca86091c6d..ff51ae6264a6 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,6 +1,8 @@
 Welcome to vLLM!
 ================
 
+vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
+
 Documentation
 -------------
 
diff --git a/vllm/outputs.py b/vllm/outputs.py
index db487e696b8e..2e957a810cd4 100644
--- a/vllm/outputs.py
+++ b/vllm/outputs.py
@@ -4,6 +4,18 @@
 
 
 class CompletionOutput:
+    """The output data of one completion output of a request.
+
+    Args:
+        index: The index of the output in the request.
+        text: The generated output text.
+        token_ids: The token IDs of the generated output text.
+        cumulative_logprob: The cumulative log probability of the generated
+            output text.
+        logprobs: The log probabilities of the top probability tokens at each
+            position, if the logprobs are requested.
+        finish_reason: The reason why the sequence is finished.
+    """
 
     def __init__(
         self,
@@ -11,7 +23,7 @@ def __init__(
         self,
         text: str,
         token_ids: List[int],
         cumulative_logprob: float,
-        logprobs: List[Dict[int, float]],
+        logprobs: Optional[List[Dict[int, float]]],
         finish_reason: Optional[str] = None,
     ) -> None:
         self.index = index
@@ -34,7 +46,14 @@ def __repr__(self) -> str:
 
 
 class RequestOutput:
+    """The output data of a request to the LLM.
+    Args:
+        request_id: The unique ID of the request.
+        prompt: The prompt string of the request.
+        prompt_token_ids: The token IDs of the prompt.
+        outputs: The output sequences of the request.
+    """
 
     def __init__(
         self,
         request_id: str,
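As a complement to the docstrings above, here is a minimal sketch (not part of the diff) of how the documented ``RequestOutput`` and ``CompletionOutput`` fields are read after ``llm.generate``. It only combines the quickstart example with the attributes listed in the new docstrings; the model and sampling values are taken from the quickstart, and the single-prompt list is an arbitrary choice for illustration.

.. code-block:: python

    from vllm import LLM, SamplingParams

    # Sketch only: model and sampling values are reused from the quickstart above.
    llm = LLM(model="facebook/opt-125m")
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    outputs = llm.generate(["San Francisco is a"], sampling_params)
    for request_output in outputs:               # one RequestOutput per input prompt
        print(request_output.request_id)         # unique ID of the request
        print(request_output.prompt)             # the prompt string
        print(request_output.prompt_token_ids)   # token IDs of the prompt
        for completion in request_output.outputs:    # CompletionOutput objects
            print(completion.index, repr(completion.text))
            print(completion.token_ids)           # token IDs of the generated text
            print(completion.cumulative_logprob)  # cumulative log probability
            print(completion.finish_reason)       # why the sequence finished
            print(completion.logprobs)            # None unless logprobs are requested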