[Bug]: The Offline Inference Embedding Example Fails #5181

Closed · cuizhuyefei opened this issue Jun 1, 2024 · 6 comments · Fixed by #5184

Labels: bug (Something isn't working)

@cuizhuyefei

Your current environment

Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX 6000 Ada Generation
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX 6000 Ada Generation

Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      43 bits physical, 48 bits virtual
CPU(s):                             112
On-line CPU(s) list:                0-111
Thread(s) per core:                 2
Core(s) per socket:                 28
Socket(s):                          2
NUMA node(s):                       8
Vendor ID:                          AuthenticAMD
CPU family:                         25
Model:                              1
Model name:                         AMD EPYC 7453 28-Core Processor
Stepping:                           1
Frequency boost:                    enabled
CPU MHz:                            1500.000
CPU max MHz:                        3488.5249
CPU min MHz:                        1500.0000
BogoMIPS:                           5499.98
Virtualization:                     AMD-V
L1d cache:                          1.8 MiB
L1i cache:                          1.8 MiB
L2 cache:                           28 MiB
L3 cache:                           128 MiB
NUMA node0 CPU(s):                  0-6,56-62
NUMA node1 CPU(s):                  7-13,63-69
NUMA node2 CPU(s):                  14-20,70-76
NUMA node3 CPU(s):                  21-27,77-83
NUMA node4 CPU(s):                  28-34,84-90
NUMA node5 CPU(s):                  35-41,91-97
NUMA node6 CPU(s):                  42-48,98-104
NUMA node7 CPU(s):                  49-55,105-111
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchvision==0.18.0
[pip3] triton==2.3.0
[conda] No relevant packages
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     NV4     SYS     21-27,77-83     3               N/A
GPU1    SYS      X      SYS     SYS     14-20,70-76     2               N/A
GPU2    NV4     SYS      X      SYS     49-55,105-111   7               N/A
GPU3    SYS     SYS     SYS      X      35-41,91-97     5               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

With the latest vllm-0.4.3 installed, when I run the official offline inference embedding example (https://docs.vllm.ai/en/stable/getting_started/examples/offline_inference_embedding.html), the line outputs = model.encode(prompts) fails with the following errors:
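
For reference, the example in question looks roughly like this (reconstructed from the docs page above; exact prompts and arguments may differ between vLLM versions):

```python
from vllm import LLM

prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Create an LLM whose architecture supports embedding.
model = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)

# Generate an embedding for each prompt. This is the line that raises the error.
outputs = model.encode(prompts)

for output in outputs:
    print(output.outputs.embedding)  # list of hidden_size floats
```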

[rank0]: Traceback (most recent call last):
[rank0]:   File "<stdin>", line 1, in <module>
[rank0]:   File "home/lib/python3.10/site-packages/vllm/utils.py", line 672, in inner
[rank0]:     return fn(*args, **kwargs)
[rank0]:   File "home/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 444, in encode
[rank0]:     outputs = self._run_engine(use_tqdm=use_tqdm)
[rank0]:   File "home/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 552, in _run_engine
[rank0]:     step_outputs = self.llm_engine.step()
[rank0]:   File "home/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 772, in step
[rank0]:     output = self.model_executor.execute_model(
[rank0]:   File "home/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 91, in execute_model
[rank0]:     output = self.driver_worker.execute_model(execute_model_req)
[rank0]:   File "home/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "home/lib/python3.10/site-packages/vllm/worker/worker.py", line 272, in execute_model
[rank0]:     output = self.model_runner.execute_model(seq_group_metadata_list,
[rank0]:   File "home/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "home/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 707, in execute_model
[rank0]:     ) = self.prepare_input_tensors(seq_group_metadata_list)
[rank0]:   File "home/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 654, in prepare_input_tensors
[rank0]:     sampling_metadata = SamplingMetadata.prepare(
[rank0]:   File "home/lib/python3.10/site-packages/vllm/model_executor/sampling_metadata.py", line 116, in prepare
[rank0]:     ) = _prepare_seq_groups(seq_group_metadata_list, seq_lens, query_lens,
[rank0]:   File "home/lib/python3.10/site-packages/vllm/model_executor/sampling_metadata.py", line 210, in _prepare_seq_groups
[rank0]:     if sampling_params.seed is not None:
[rank0]: AttributeError: 'NoneType' object has no attribute 'seed'
cuizhuyefei added the bug (Something isn't working) label on Jun 1, 2024
@robertgshaw2-neuralmagic (Collaborator)

I just ran the example and did not see this issue.

What model are you using? This error can occur if you call .encode on an XXXForCausalLM model.
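
For example (a sketch; substitute your own model names):

```python
from vllm import LLM

# Embedding model: a bare "...Model" architecture, so .encode() is supported.
embed_model = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)
print(embed_model.encode(["hello"])[0].outputs.embedding[:4])

# Generation model: a "...ForCausalLM" architecture, so use .generate(), not .encode().
gen_model = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
print(gen_model.generate(["hello"])[0].outputs[0].text)
```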

@Delviet (Contributor) commented Jun 1, 2024

Interestingly enough, the example works fine for me, and I actually see the expected results (a list of numbers) in my CLI.

Moreover, your error message states:

...
[rank0]:   File "home/lib/python3.10/site-packages/vllm/model_executor/sampling_metadata.py", line 210, in _prepare_seq_groups
[rank0]:     if sampling_params.seed is not None:
[rank0]: AttributeError: 'NoneType' object has no attribute 'seed'

The problem is that if sampling_params.seed is not None: is on line 208 (not 210) in the current version of the file. It seems like you may have modified the file somehow, and then it stopped working.

Hope this helps.

@cuizhuyefei (Author)

Thanks for all of your help!
I had indeed modified the source code after encountering this error, but I've since changed it back to the original (the modification didn't change any functionality).

Interestingly, the script works well with intfloat/e5-mistral-7b-instruct. After changing the model to mistralai/Mistral-7B-Instruct-v0.2, I got the error mentioned earlier. Do you have suggestions for how I can use this specific model? Really appreciate your help!

@robertgshaw2-neuralmagic (Collaborator) commented Jun 1, 2024

  • mistralai/Mistral-7B-Instruct-v0.2 is an XXXForCausalLM model. CausalLM means that it generates text, so it should not be used for embeddings. See the config:
{
  "architectures": [
    "MistralForCausalLM" # << this tells us its a generation model
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.36.0",
  "use_cache": true,
  "vocab_size": 32000
}
  • intfloat/e5-mistral-7b-instruct is an XXXModel. This means the model just produces embeddings, so it should be used for embeddings. See the config:
{
  "_name_or_path": "mistralai/Mistral-7B-v0.1",
  "architectures": [
    "MistralModel"    # <<< this tells us its an embedding model
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pad_token_id": 2,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.34.0",
  "use_cache": false,
  "vocab_size": 32000
}

We automatically detect whether the model is an embedding model or a generation model based on this config field. Supporting embedding models is a new feature. Thank you for bringing this bad UX to my attention.
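
For example, you can check the architectures field ahead of time with Hugging Face transformers (a quick sketch):

```python
from transformers import AutoConfig

for name in ("mistralai/Mistral-7B-Instruct-v0.2", "intfloat/e5-mistral-7b-instruct"):
    config = AutoConfig.from_pretrained(name)
    # "...ForCausalLM" -> generation model; a bare "...Model" -> embedding model
    print(name, config.architectures)
```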

I am going to update to:

  • log a better error message
  • make some documentation to help users understand how to use this better

@cuizhuyefei (Author)

I get it. Thanks for explaining this!

@zankner commented Jun 8, 2024

For what it's worth, I think people might want to use a causal LM to generate embeddings of just the prompt; at least, that's the use case I currently have.
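
If you need prompt embeddings from a causal LM today, one workaround outside vLLM is to pool the base model's hidden states directly with Hugging Face transformers. A rough sketch (last-token pooling is just one common choice, not something vLLM prescribes):

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(name)
# AutoModel loads the bare MistralModel (no LM head), which returns hidden states.
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

embedding = hidden[0, -1]  # prompt embedding from the last token's hidden state
```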
