[RFC]: Merge input processor and input mapper for multi-modal models #10114

Open · 10 of 18 tasks · Tracked by #4194
DarkLight1337 opened this issue Nov 7, 2024 · 8 comments
@DarkLight1337 (Member) commented Nov 7, 2024

Motivation

Background

To provide more control over the model inputs, we currently define two methods for multi-modal models in vLLM:

  • The input processor is called inside LLMEngine to extend the prompt with placeholder tokens which are reserved for vLLM features such as KV cache and chunked prefill.
  • The input mapper is called inside ModelRunner to transform multi-modal inputs (e.g. PIL images) into tensor inputs, usually via the modality-specific processor (e.g. AutoImageProcessor) from HuggingFace. (A rough sketch of the current registration pattern follows this list.)
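For reference, the current two-hook registration looks roughly like this in a model file. This is only a sketch: the registry decorators mirror vLLM's existing API at the time of writing, but the exact names and signatures vary between versions, and the helper functions are placeholders for the model-specific implementations.

```python
import torch.nn as nn

from vllm.inputs import INPUT_REGISTRY
from vllm.multimodal import MULTIMODAL_REGISTRY


def input_processor_for_mymodel(ctx, llm_inputs):
    # Runs inside LLMEngine: edit the prompt's token IDs to insert the
    # placeholder tokens for each multi-modal item.
    ...


def image_input_mapper_for_mymodel(ctx, data):
    # Runs inside ModelRunner: turn raw inputs (e.g. PIL images) into
    # tensors, usually via the HF AutoImageProcessor.
    ...


@MULTIMODAL_REGISTRY.register_image_input_mapper(image_input_mapper_for_mymodel)
@INPUT_REGISTRY.register_input_processor(input_processor_for_mymodel)
class MyModelForConditionalGeneration(nn.Module):
    ...
```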

Issues with the current design

  1. The input processor accepts the output of HF AutoTokenizer (a list of token IDs) instead of the text prompt. Since HF AutoProcessor doesn’t accept token IDs, we have to write custom code to edit the list of token IDs based on the multi-modal inputs (illustrated after this list). For some models (such as Phi-3-vision), this means re-implementing code from their HF AutoProcessor, complicating the process of porting the model to vLLM.
  2. The input mapper, being inside ModelRunner, lies on the critical path of vLLM’s model execution. Even when the input mapper is fast, the tail TTFT and TPOT suffer because of this. As the input mapper takes up more time, our overall throughput decreases proportionally; this could be avoided by moving it off the critical path. Nevertheless, we can do little if the AutoProcessor inside the input mapper is very slow, as in #9238. We hope that huggingface/transformers#33810 can help with that!
  3. This abstraction results in redundant processing for models (such as Qwen2-VL and Molmo) whose HF AutoProcessor already performs most of the work of calculating the number of placeholder tokens.
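To make Issue 1 concrete, here is a hedged illustration using Phi-3-vision (the prompt format is only an example, and the snippet assumes the model's processor can be downloaded). HF AutoProcessor consumes raw text plus images; there is no entry point that takes a pre-tokenized prompt, which is why the input processor has to splice placeholder tokens into the token-ID list by hand:

```python
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "microsoft/Phi-3-vision-128k-instruct", trust_remote_code=True
)

image = Image.new("RGB", (336, 336))  # stand-in for a real input image

# HF AutoProcessor works on raw text + images...
out = processor(
    text="<|user|>\n<|image_1|>\nWhat is shown here?<|end|>\n<|assistant|>\n",
    images=[image],
    return_tensors="pt",
)

# ...but vLLM's input processor only receives token IDs, e.g.
#   prompt_token_ids = [1, 32010, 29871, ...]
# There is no HF processor API that accepts token IDs, so the placeholder
# expansion has to be re-implemented on the token-ID list by hand.
```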

Proposed Change

Unified multi-modal processor

We plan to merge our input processor and input mapper into a unified multi-modal processor and call it inside the LLMEngine (and thus benefit from #8779), taking over the role of the existing tokenizer. After this change, each input type will be processed as follows (a routing sketch follows the list):

  • Text-only prompt: Pass to vLLM tokenizer (wraps HF AutoTokenizer) [Unchanged]
  • List of token IDs: Skip vLLM tokenizer [Unchanged]
  • Text prompt with multi-modal input: Pass to vLLM multi-modal processor (wraps HF AutoProcessor) [NEW]
  • List of token IDs with multi-modal input: [DEPRECATED, see below]
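As a rough sketch, the routing above could look like the following (purely illustrative; the function and argument names are not actual vLLM APIs):

```python
def preprocess(prompt, mm_data, tokenizer, mm_processor):
    """Hypothetical routing that mirrors the list above."""
    if mm_data is None:
        if isinstance(prompt, str):
            return tokenizer(prompt)           # text-only: vLLM tokenizer
        return prompt                          # token IDs: skip the tokenizer
    if isinstance(prompt, str):
        return mm_processor(prompt, mm_data)   # NEW: unified multi-modal processor
    raise ValueError("Token IDs with multi-modal input are deprecated")
```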

This multi-modal processor will first call HF AutoProcessor, and then modify the processed token IDs by inserting placeholder tokens. (These processed token IDs are not to be confused with the deprecated “list of token IDs with multi-modal input”, in which the “list of token IDs” represents the tokenized text before processing with multi-modal input.) The number of placeholder tokens to assign can be determined by the existing feature size calculations for each model.
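A minimal sketch of the placeholder-insertion step, assuming hypothetical helpers: `placeholder_id` and `feature_sizes` stand in for the per-model placeholder token and the existing feature-size calculations, and are not actual vLLM names.

```python
def process_multimodal_prompt(hf_processor, text, images,
                              placeholder_id, feature_sizes):
    """Hypothetical sketch of the unified multi-modal processor.

    `placeholder_id` is the model's image placeholder token ID and
    `feature_sizes[i]` is the feature size of the i-th image, computed by
    the existing per-model feature size calculations.
    """
    # Step 1: let the HF AutoProcessor handle tokenization and image
    # preprocessing in one call.
    hf_out = hf_processor(text=text, images=images, return_tensors="pt")
    token_ids = hf_out["input_ids"][0].tolist()

    # Step 2: expand each image placeholder to the number of embedding
    # positions the model will actually consume.
    expanded, image_idx = [], 0
    for tok in token_ids:
        if tok == placeholder_id:
            expanded.extend([placeholder_id] * feature_sizes[image_idx])
            image_idx += 1
        else:
            expanded.append(tok)

    # Illustrative output shape: processed token IDs plus the tensors that
    # will be fed to the model's multi-modal encoder.
    return {"prompt_token_ids": expanded, "multi_modal_data": hf_out}
```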

Deprecate token IDs with multi-modal input

To be compatible with OpenAI’s (legacy) Completions API, we currently support passing token IDs directly to both the LLM class and the OpenAI-compatible server. However, the Completions API doesn’t support multi-modal inputs, so we will deprecate passing token IDs alongside multi-modal inputs to simplify model implementation (see Issue 1 above). Please tell us if you have a use case for this and don’t want to see it removed!

Feedback Period

Feel free to comment as the effort progresses!

Timeline

Most of this code will be called inside the existing InputPreprocessor, which is separate from the vLLM engine, making it easy to integrate with #8779.

CC List

@ywang96 @Isotr0py @WoosukKwon @robertgshaw2-neuralmagic

Any Other Things

Multi-modal plugins remain supported

You can define additional modalities in MultiModalProcessingMetadata to handle your custom multi-modal plugins. If the names of those modalities are not valid keyword arguments to HF AutoProcessor, you can override the default multi-modal processor (similar to how you currently need to define _default_input_mapper for multi-modal plugins).

Some users currently use multi-modal plugins to directly pass custom model inputs (#6260). We can implement an alternative process_multimodal to help them migrate to the new processing framework.

No batched preprocessing for now

Currently, preprocessing is performed per prompt in vLLM. While we can call the HF tokenizer and modality-specific processor on batched inputs separately, calling the wrapping HF AutoProcessor with both a list of texts and a list of multi-modal data results in the processed multi-modal data (e.g. images) being assigned to every text in the list, rather than the more intuitive zip-like behavior (e.g. the i-th image being assigned only to the i-th text). To support batched preprocessing, we would have to write custom code for each model to combine the outputs of the HF tokenizer and the modality-specific processors. Given that this would significantly complicate model implementation (see Issue 1 above), we will not consider batched preprocessing at this stage, even with this change.
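To illustrate the per-prompt approach, here is a hedged sketch; the broadcast-vs-zip behavior is as described above, and the zip-like pairing is imposed by processing each prompt separately:

```python
def preprocess_per_prompt(hf_processor, texts, images):
    """Per-prompt (zip-like) preprocessing, as vLLM does today.

    Calling hf_processor(text=texts, images=images) on the full lists would
    assign the processed images to every text for many processors, so each
    (text, image) pair is processed on its own instead.
    """
    return [
        hf_processor(text=text, images=[image], return_tensors="pt")
        for text, image in zip(texts, images)
    ]
```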

Processor caching

#11396 caches each item in the multi-modal output of HF processor and links them back to items in the input data.

When new data is passed in, we first check which items are in the cache, and which ones are missing. The missing items are passed into the HF processor in a single batch and cached, before being merged with the existing items in the cache.

On the other hand, the text is tokenized on its own. We use automatic placeholder replacement to insert multi-modal placeholders into the tokenized text so that it remains consistent with the multi-modal data. This should work for most HF processors without modification.

Finally, we combine the tokenized text and multi-modal data to form the overall processed data.
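A minimal sketch of the caching flow described above; the names here are hypothetical (the actual implementation is in #11396), and `process_fn`/`item_key` stand in for the batched HF processor call and the item hashing, respectively.

```python
def process_mm_items(items, process_fn, cache):
    """Hypothetical sketch of the processor cache described in #11396.

    `items` are raw multi-modal inputs (e.g. PIL images), `process_fn`
    processes a batch of raw items (e.g. by calling the HF processor) and
    returns one output per item, and `cache` maps an item key to its
    previously processed output.
    """
    keys = [item_key(item) for item in items]   # item_key: hypothetical hashing helper
    missing = [(k, item) for k, item in zip(keys, items) if k not in cache]

    if missing:
        # Process all cache misses in a single batch, then store them.
        outputs = process_fn([item for _, item in missing])
        for (k, _), out in zip(missing, outputs):
            cache[k] = out

    # Merge cached and freshly processed items back into the input order.
    return [cache[k] for k in keys]
```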

@robertgshaw2-neuralmagic (Collaborator)

This is great. In the EngineCore/AsyncLLM refactor (#9826), we introduced the concept of a Processor. I think this code should sit in there.

Your initiative here will fit very well with the EngineCore/AsyncLLM refactor, since the Processor runs in process 0 while the EngineCore runs in process 1. This means that we can overlap the input processing with the model execution (which is not currently possible, since the input processing runs in the ModelRunner, which is part of the EngineCore).

One other note: the Processor in the linked PR currently runs inside process 0. However, we made the APIs such that we can adjust the Processor to run in N background processes if needed. So, if you can work within this class, we get a nice separation of concerns, which will enable us to offload more things to background processes as needed.

Very excited about this!

@mlinmg commented Nov 29, 2024

I would like to discuss an edge case where passing the input IDs together with the multi-modal args is rather useful.
My use case is a general TTS engine I have implemented using vLLM as the backbone for the decoder model. In TTS you essentially have two vocabularies: one for text, mapped by a tokenizer, and one for "audio tokens" that aren't mapped by any tokenizer; the decoder model usually generates tokens that cannot be mapped back through the text tokenizer. Since the two vocabularies have different BOS and EOS tokens, it is rather complex to unify the preprocessing, and it is much easier to just do it manually (https://github.com/astramind-ai/Auralis/blob/main/src/auralis/models/xttsv2/XTTSv2.py and https://github.com/astramind-ai/Auralis/blob/main/src/auralis/models/xttsv2/components/vllm_mm_gpt.py).

@mlinmg commented Nov 29, 2024

Maybe a solution is to explicitly include a superclass in the model definition that allows such behavior, and otherwise deprecate it?

@DarkLight1337 (Member, Author)

Maybe we can make a special case and allow token IDs when none of the other inputs are processed by HF.

@jnordberg

INFO 12-06 07:56:21 preprocess.py:215] Your model uses the legacy input pipeline instead of the new multi-modal processor. Please note that the legacy pipeline will be removed in a future release. For more details, see: https://github.com/vllm-project/vllm/issues/10114

This seems a bit premature, since the new multi-modal processor isn't even usable yet.

@DarkLight1337 (Member, Author)

(Quoting @jnordberg's comment above about the legacy input pipeline log message.)

The purpose of that is to direct users to this RFC thread, so we can get more thoughts.

@jnordberg

My thought is that the warning is very annoying 😀

[Screenshot: 2024-12-06 at 17:03:58]

@DarkLight1337 (Member, Author) commented Dec 6, 2024

Sorry for the spam; this has been fixed in #10530, so the message is now only logged once.
