feat: Support serializing and deserializing LoRA adapters #3
Serializing and deserializing LoRA adapters using `tensorizer`

This PR allows LoRA adapters to be serialized and deserialized using `tensorizer`. A test is added to confirm this, and LoRA files can be serialized with the same script that saves vLLM models, `examples/tensorize_vllm_model.py`.

**Summary of changes**
- Updated `.buildkite/test-pipeline.yaml`'s invocation of the `tensorize_vllm_model.py` example script to additionally save and load a LoRA adapter, testing a model generation after deserialization to confirm the LoRA adapter was loaded properly.
- Added `--lora-path` to `tensorize_vllm_model.py` as a base argparser argument that lets a user specify the HuggingFace reference ID of a LoRA adapter, which is either serialized or deserialized depending on whether the `serialize` or `deserialize` subparser is indicated.
- Added `test_serialize_and_deserialize_lora` to `test_tensorizer.py`, testing saving and loading LoRA adapters with `tensorizer`.
- Allowed `TensorizerConfig` to be passed as a kwarg to `LoRARequest`. When this is done, the LoRA tensors are assumed to be in `tensorizer` format and are deserialized according to the parameters given in the `TensorizerConfig` provided.

Note: I noticed a few things that need hotfixes after putting up this PR. I've stamped `TODO`s on these and am working on them now while waiting for reviews. If you still see any `TODO`s (which means I haven't addressed them yet), please feel free to comment on them.

NB: In addition, an extra planned change will add OpenAI API inference engine `tensorizer` support tests to the test pipeline, to ensure these can be part of fast-check for forward-compatibility (this has been added in cc3f984). As such, the state of this PR is not final and will warrant a further review once that piece is added (it may be more appropriate to draft this, but a review of the current state is warranted).
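Since `--lora-path` is a base argparser argument shared by both subcommands, the CLI layout can be sketched roughly as below. This is only an illustrative stdlib `argparse` sketch, not the script's actual interface: the real `tensorize_vllm_model.py` takes many more arguments, and the adapter ID shown is a placeholder.

```python
import argparse

# Sketch of the argparser layout described above: --lora-path is a base
# argument, while serialize/deserialize are subcommands.
parser = argparse.ArgumentParser(description="tensorize_vllm_model.py CLI sketch")
parser.add_argument(
    "--lora-path",
    type=str,
    default=None,
    help="HuggingFace reference ID of a LoRA adapter to serialize/deserialize",
)
subparsers = parser.add_subparsers(dest="command", required=True)
subparsers.add_parser("serialize")
subparsers.add_parser("deserialize")

# The subcommand chosen decides whether the adapter at --lora-path is
# serialized or deserialized.
args = parser.parse_args(["--lora-path", "some-org/some-lora-adapter", "serialize"])
print(args.command, args.lora_path)
```

Because `--lora-path` sits on the base parser, it must be given before the subcommand name; placing it on each subparser instead would let it appear after, at the cost of duplicating the argument definition.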