[Bug]: 'ActorHandle' object has no attribute 'decoding_config' when setting --engine-use-ray #4317

Closed
syGOAT opened this issue Apr 24, 2024 · 5 comments · Fixed by #4335
Labels: bug (Something isn't working)

syGOAT commented Apr 24, 2024

Your current environment

PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31

Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20

Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      52 bits physical, 57 bits virtual
CPU(s):                             180
On-line CPU(s) list:                0-179
Thread(s) per core:                 2
Core(s) per socket:                 45
Socket(s):                          2
NUMA node(s):                       2
Vendor ID:                          GenuineIntel
CPU family:                         6
Model:                              143
Model name:                         Intel(R) Xeon(R) Platinum 8457C
Stepping:                           8
CPU MHz:                            2600.000
BogoMIPS:                           5200.00
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          4.2 MiB
L1i cache:                          2.8 MiB
L2 cache:                           180 MiB
L3 cache:                           195 MiB
NUMA node0 CPU(s):                  0-89
NUMA node1 CPU(s):                  90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Unknown: No mitigations
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; TSX disabled
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] triton==2.2.0
[pip3] vllm-nccl-cu12==2.18.1.0.3.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.19.3                   pypi_0    pypi
[conda] torch                     2.2.1                    pypi_0    pypi
[conda] triton                    2.2.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.3.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU1    SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU2    SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU3    SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU4    SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     90-179  1               N/A
GPU5    SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     90-179  1               N/A
GPU6    SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     90-179  1               N/A
GPU7    SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     90-179  1               N/A
NIC0    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

🐛 Describe the bug

I used this command to run Meta-Llama-3-70B-Instruct:

python -m vllm.entrypoints.openai.api_server --model /root/autodl-tmp/model/Meta-Llama-3-70B-Instruct --tensor-parallel-size 4 --port 8000 --served-model-name gpt-4 --engine-use-ray

Server was started successfully:

# ..........
INFO:     Started server process [760244]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

I requested the server with this body:

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "Always response in Chinese, not English."
    },
    {
      "role": "user",
      "content": "在客厅,有一个盘子放在香蕉上面,把盘子带到餐厅,香蕉在哪里"
    }
  ],
  "max_tokens": 150
}
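
(For reference, a minimal Python sketch of this request — hypothetical reproduction code, not from the report; it assumes the server launched above is reachable at localhost, with port 8000 taken from the launch command:)

# Minimal reproduction sketch (assumed host localhost; port 8000 from the launch command).
import requests

payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "Always response in Chinese, not English."},
        # The user message asks (in Chinese): "In the living room, a plate is on top
        # of a banana; take the plate to the dining room. Where is the banana?"
        {"role": "user", "content": "在客厅,有一个盘子放在香蕉上面,把盘子带到餐厅,香蕉在哪里"},
    ],
    "max_tokens": 150,
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(resp.status_code)  # returns 500 when --engine-use-ray is set on vLLM 0.4.1
print(resp.text)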

But I got an error from the vllm server:

INFO:     101.126.64.120:0 - "POST /v1/chat/completions HTTP/1.0" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/root/autodl-tmp/feng_workspace/vllm/vllm/entrypoints/openai/api_server.py", line 89, in create_chat_completion
    generator = await openai_serving_chat.create_chat_completion(
  File "/root/autodl-tmp/feng_workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 71, in create_chat_completion
    decoding_config = self.engine.engine.decoding_config
  File "/root/autodl-tmp/miniconda3/envs/vllm0.4/lib/python3.10/site-packages/ray/actor.py", line 1472, in __getattr__
    raise AttributeError(
AttributeError: 'ActorHandle' object has no attribute 'decoding_config'
INFO:     101.126.64.120:0 - "POST /v1/chat/completions HTTP/1.0" 500 Internal Server Error

It is similar to #3517, but my vLLM version is 0.4.1+cu122, built from a37d815b83849b5a96a182929dd6f3bd35f68fb8, so that earlier issue should not be occurring again.
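
For context on the failing frame: with --engine-use-ray the engine runs inside a Ray actor, and a Ray ActorHandle does not expose the wrapped object's attributes, which is why the plain attribute access in serving_chat.py raises the AttributeError above; actor state has to be fetched through a remote method call. A generic Ray sketch of the pattern (illustration only, not vLLM internals; the Engine class and config dict here are made up):

# Generic Ray sketch: attributes behind an ActorHandle are not reachable
# by plain attribute access; use a remote method call instead.
import ray

ray.init()

@ray.remote
class Engine:
    def __init__(self):
        self.decoding_config = {"backend": "outlines"}  # hypothetical state

    def get_decoding_config(self):
        return self.decoding_config

engine = Engine.remote()

# engine.decoding_config  # raises: AttributeError: 'ActorHandle' object
#                         # has no attribute 'decoding_config'
config = ray.get(engine.get_decoding_config.remote())  # correct pattern
print(config)

Presumably the fix in #4335 routes this lookup through such a remote call when engine_use_ray is set.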

syGOAT added the bug label Apr 24, 2024
bks5881 commented Apr 24, 2024

Same problem, same version. #4291

Uhao-P commented Apr 24, 2024

I encountered the same problem.

tachyean commented

Same for me

bks5881 commented Apr 24, 2024

Although if I remove the --engine-use-ray argument, it works.

eliran89c commented

Same issue, different model (I tried Mistral-7B and Llama-8B).
The same client code works when running the model on a Ray head node (no --engine-use-ray flag).
