
Commit 5e00fc7

pcmoritz authored and dtrifiro committed

[Minor] Fix small typo in llama.py: QKVParallelLinear -> QuantizationConfig (vllm-project#4991)

1 parent: 7cf54ef

File tree

1 file changed, +1 −1

vllm/model_executor/models/llama.py (+1 −1)

@@ -57,7 +57,7 @@ def __init__(
         hidden_size: int,
         intermediate_size: int,
         hidden_act: str,
-        quant_config: Optional[QKVParallelLinear] = None,
+        quant_config: Optional[QuantizationConfig] = None,
         bias: bool = False,
     ) -> None:
         super().__init__()
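
For context, the fix only touches a type annotation on the constructor; below is a minimal sketch of the corrected signature. The class name LlamaMLP, the torch.nn base class, and the QuantizationConfig import path are assumptions drawn from the surrounding vLLM code, not from this diff.

# Minimal sketch of the corrected constructor signature (assumptions noted above).
from typing import Optional

import torch.nn as nn

# Import path assumed from the vLLM codebase; only the annotation change is from the diff.
from vllm.model_executor.layers.quantization.base_config import QuantizationConfig


class LlamaMLP(nn.Module):  # class name assumed; the diff only shows __init__
    def __init__(
        self,
        hidden_size: int,
        intermediate_size: int,
        hidden_act: str,
        quant_config: Optional[QuantizationConfig] = None,  # was Optional[QKVParallelLinear]
        bias: bool = False,
    ) -> None:
        super().__init__()
        # Layer construction omitted; quant_config is typically forwarded to the
        # parallel linear layers so they can select a quantized weight implementation.

The original annotation referenced QKVParallelLinear (a layer class), which was merely a typo: annotations are not enforced at runtime, so behavior was unchanged, but the hint was misleading to readers and type checkers.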
