
feat: Support serializing and deserializing LoRA adapters #3

Open · wants to merge 12 commits into main from sangstar/add-tensorizer-lora
Conversation

@sangstar (Collaborator) commented Dec 9, 2024

Serializing and deserializing LoRA adapters using tensorizer

This PR allows LoRA adapters to be serialized and deserialized using tensorizer. A test is added to confirm this, and LoRA adapters can be serialized with the same script that saves vLLM models, examples/tensorize_vllm_model.py.
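As a rough usage sketch, the invocations below follow the serialize/deserialize subparsers of vLLM's existing example script plus the --lora-path base argument this PR adds; the model ID, adapter repo, and S3 paths are placeholders, not values from this PR:

```bash
# Serialize a model together with a LoRA adapter.
# --lora-path is the base argument added by this PR; the remaining flags
# follow the example script's existing serialize subparser.
python examples/tensorize_vllm_model.py \
    --model meta-llama/Llama-2-7b-hf \
    --lora-path yard1/llama-2-7b-sql-lora-test \
    serialize \
    --serialized-directory s3://my-bucket \
    --suffix v1

# Later, deserialize the model and the LoRA adapter from the same location.
python examples/tensorize_vllm_model.py \
    --model meta-llama/Llama-2-7b-hf \
    --lora-path yard1/llama-2-7b-sql-lora-test \
    deserialize \
    --path-to-tensors s3://my-bucket/vllm/meta-llama/Llama-2-7b-hf/v1/model.tensors
```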

Summary of changes

  • Adjusted the CLI args for .buildkite/test-pipeline.yaml's invocation of the tensorize_vllm_model.py example script to additionally save and load a LoRA adapter, and added a generation test after deserialization to confirm the LoRA adapter was loaded properly.
  • Added --lora-path to tensorize_vllm_model.py as a base argparser argument, letting a user specify the HuggingFace reference ID of a LoRA adapter, which is then serialized or deserialized depending on whether the serialize or deserialize subparser is invoked.
  • Added test_serialize_and_deserialize_lora to test_tensorizer.py, testing saving and loading LoRA adapters with tensorizer.
  • Allowed deserializing a LoRA adapter by letting a TensorizerConfig be passed as a kwarg to LoRARequest. When this is done, the LoRA tensors are assumed to be in tensorizer format and are deserialized according to the parameters given in the provided TensorizerConfig; see the sketch after this list.
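As a minimal sketch of that last item, assuming the kwarg is named tensorizer_config as the PR description suggests and that the adapter tensors were previously serialized to the given URI (the model ID, adapter name, and S3 paths are placeholders):

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest
from vllm.model_executor.model_loader.tensorizer import TensorizerConfig

# Points at LoRA tensors previously serialized with tensorizer; the URI is a
# placeholder and the kwarg name below is assumed from this PR's description.
lora_tensorizer_config = TensorizerConfig(
    tensorizer_uri="s3://my-bucket/vllm/lora/sql-lora/v1/adapter_model.tensors",
)

llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
outputs = llm.generate(
    "Generate a SQL query for the following schema...",
    SamplingParams(max_tokens=64),
    lora_request=LoRARequest(
        "sql-lora",                               # adapter name
        1,                                        # unique adapter ID
        "s3://my-bucket/vllm/lora/sql-lora/v1",   # adapter path
        tensorizer_config=lora_tensorizer_config, # assumed kwarg from this PR
    ),
)
print(outputs[0].outputs[0].text)
```

Without the tensorizer_config kwarg, LoRARequest presumably loads the adapter in the usual (non-tensorizer) format, so passing the config is what opts the request into tensorizer deserialization.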

Note: I noticed a few things that need hotfixes after putting up this PR. I've stamped TODOs on them and am working on them now while waiting for reviews. If you still see any TODOs (meaning I haven't addressed them yet), please feel free to comment on them.

NB: In addition, a planned change will extend the test pipeline with OpenAI API inference engine tensorizer support tests, to ensure these can be part of fastcheck for forward compatibility. As such, the state of this PR is not final and will warrant a further review once that piece is added (it may be more appropriate to mark this as a draft, but a review of the current state is still warranted). This has been added in cc3f984.

github-actions bot commented Dec 9, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of these by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@sangstar sangstar requested review from wbrown and Eta0 December 9, 2024 21:10
@sangstar sangstar force-pushed the sangstar/add-tensorizer-lora branch from e227e36 to f7c0f8c on December 11, 2024