[Bugfix] fix lora_dtype value type in arg_utils.py #5398

Merged
1 commit merged on Jun 11, 2024

Conversation

@c3-ali (Contributor) commented on Jun 11, 2024


FIX #5397

@simon-mo simon-mo enabled auto-merge (squash) June 11, 2024 00:56
@rkooo567 (Collaborator) left a comment

QQ: does it not support torch.dtype?

@zhuohan123 zhuohan123 disabled auto-merge June 11, 2024 05:50
@zhuohan123 zhuohan123 enabled auto-merge (squash) June 11, 2024 05:50
@c3-ali (Contributor, Author) commented on Jun 11, 2024

@rkooo567 It does! I followed the argparse options, choices=['auto', 'float16', 'bfloat16', 'float32'], and used str, but LoRAConfig declares the field as lora_dtype: Optional[torch.dtype] = None, and the implementation actually supports both str and torch.dtype:

    def verify_with_model_config(self, model_config: ModelConfig):
        # "auto" (or an unset None) falls back to the model's own dtype.
        if self.lora_dtype in (None, "auto"):
            self.lora_dtype = model_config.dtype
        # A dtype name like "bfloat16" is resolved to the matching torch.dtype.
        elif isinstance(self.lora_dtype, str):
            self.lora_dtype = getattr(torch, self.lora_dtype)

So lora_dtype: Optional[Union[str, torch.dtype]] = 'auto' is the more precise annotation. I'm going to make that change.
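For context, here is a minimal, self-contained sketch (not the actual vLLM source; ModelConfig is reduced to a single dtype field for illustration) showing how both accepted forms flow through the proposed annotation and verify_with_model_config:

    # Minimal sketch with simplified stand-ins, not the real vLLM classes.
    from dataclasses import dataclass
    from typing import Optional, Union

    import torch


    @dataclass
    class ModelConfig:
        # Stand-in for vLLM's ModelConfig: only the dtype field matters here.
        dtype: torch.dtype = torch.float16


    @dataclass
    class LoRAConfig:
        # Proposed annotation: accept the argparse string ('auto', 'float16',
        # 'bfloat16', 'float32') or a torch.dtype passed programmatically.
        lora_dtype: Optional[Union[str, torch.dtype]] = "auto"

        def verify_with_model_config(self, model_config: ModelConfig):
            if self.lora_dtype in (None, "auto"):
                self.lora_dtype = model_config.dtype
            elif isinstance(self.lora_dtype, str):
                self.lora_dtype = getattr(torch, self.lora_dtype)


    cfg = LoRAConfig(lora_dtype="bfloat16")      # string from the CLI
    cfg.verify_with_model_config(ModelConfig())
    assert cfg.lora_dtype == torch.bfloat16

    cfg = LoRAConfig(lora_dtype=torch.float32)   # torch.dtype passed directly
    cfg.verify_with_model_config(ModelConfig())
    assert cfg.lora_dtype == torch.float32

Either way, after verification lora_dtype ends up as a concrete torch.dtype, so downstream code never has to handle the string form.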

@simon-mo simon-mo disabled auto-merge June 11, 2024 17:40
@simon-mo simon-mo merged commit 00e6a2d into vllm-project:main Jun 11, 2024
100 of 103 checks passed
joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 27, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 8, 2024
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024
Development
Successfully merging this pull request may close these issues: [Bug]: EngineArgs missing value type for lora_dtype (#5397)
3 participants