AsyncEngineDeadError / RuntimeError: CUDA error: an illegal memory access was encountered #1001

Closed
xingyaoww opened this issue Sep 9, 2023 · 10 comments

xingyaoww commented Sep 9, 2023

While serving the CodeLlama 13B base model (CodeLlama-13b-hf) through the v1/completions API on a single A100, I encountered the following CUDA memory issue.
The same thing happened with the 34B base model (CodeLlama-34b-hf). However, I did not encounter this issue with any of the CodeLlama Instruct models (using the same startup config).

To make debugging easier, I attached the complete log here (it is too big, so I had to upload it somewhere else).

The error log:

INFO 09-09 08:20:08 async_llm_engine.py:120] Aborted request cmpl-223dc522668143dfb7db9b23988ec0a1.
INFO:     127.0.0.1:34054 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
Exception in callback _raise_exception_on_finish(request_tracker=<vllm.engine....x7f85d0660160>)(<Task finishe...sertions.\n')>) at /usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py:21
handle: <Handle _raise_exception_on_finish(request_tracker=<vllm.engine....x7f85d0660160>)(<Task finishe...sertions.\n')>) at /usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py:21>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 27, in _raise_exception_on_finish
    task.result()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 315, in run_engine_loop
    await self.engine_step()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 300, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 173, in step_async
    output = await self._run_workers_async(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 198, in _run_workers_async
    output = executor(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 289, in execute_model
    input_tokens, input_positions, input_metadata = self._prepare_inputs(
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 231, in _prepare_inputs
    tokens_tensor = torch.cuda.LongTensor(input_tokens)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 36, in _raise_exception_on_finish
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 31, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 27, in _raise_exception_on_finish
    task.result()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 315, in run_engine_loop
    await self.engine_step()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 300, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 173, in step_async
    output = await self._run_workers_async(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 198, in _run_workers_async
    output = executor(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 289, in execute_model
    input_tokens, input_positions, input_metadata = self._prepare_inputs(
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 231, in _prepare_inputs
    tokens_tensor = torch.cuda.LongTensor(input_tokens)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.8/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/applications.py", line 292, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/usr/local/lib/python3.8/dist-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/dist-packages/fastapi/routing.py", line 190, in run_endpoint_function
    return await dependant.call(**values)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/api_server.py", line 528, in create_completion
    async for res in result_generator:
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 387, in generate
    raise e
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 382, in generate
    async for request_output in stream:
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 69, in __anext__
    raise result
  File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 36, in _raise_exception_on_finish
    raise exc
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 31, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.

Here are the script and the Docker container (with vllm==0.1.5) I used to spin up the server.

export HOST_USER_ID=$(id -u)
DOCKER_IMG="xingyaoww/vllm:v1.1.1"

# Construct instance name using the current username and the current time.
# This is useful for running multiple instances of the docker container.
DOCKER_INSTANCE_NAME="vllm_${USER}_$(date +%Y%m%d_%H%M%S)"

# Model directory: contains model (cloned) downloaded from huggingface
# 1. git lfs install
# 2. git clone git@hf.co:<MODEL ID> # example: git clone git@hf.co:meta-llama/Llama-2-13b-chat-hf
MODEL_DIR="." # e.g., dir that contains Llama-2-13b-chat-hf
MODEL_NAME="CodeLlama-13b-hf"

# Set CUDA_VISIBLE_DEVICES to the GPU ids you want to use.
# If you have multiple GPUs, you can use this to control which GPUs are used.
export N_GPUS=1
export CUDA_VISIBLE_DEVICES=3

docker run \
    -e CUDA_VISIBLE_DEVICES \
    -v $MODEL_DIR:/home/vllm/model/ \
    --net=host --rm --gpus all \
    --shm-size=10.24gb \
    --name $DOCKER_INSTANCE_NAME \
    $DOCKER_IMG \
    bash -c "
    useradd --shell /bin/bash -u $HOST_USER_ID -o -c \"\" -m vllm; su vllm;
    python3 -m vllm.entrypoints.openai.api_server \
    --model /home/vllm/model/$MODEL_NAME \
    --tensor-parallel-size $N_GPUS \
    --served-model-name $MODEL_NAME \
    --max-num-batched-tokens 16384 \
    --load-format pt \
    --port 8005
    "
esmeetu (Collaborator) commented Sep 10, 2023

I also run into this problem when the number of generated tokens exceeds 4096. The model also starts to output gibberish, and the gibberish output may be what makes the kernel unstable. For now I limit max_tokens to 4096 and no longer see the error.
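
In practice that workaround just means keeping max_tokens at or below 4096 in each request, roughly like this sketch (model name and prompt are placeholders):

# Same completions endpoint as above, with max_tokens capped at 4096 as a
# workaround; model name and prompt are placeholders.
curl http://localhost:8005/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "CodeLlama-13b-hf", "prompt": "def fibonacci(n):", "max_tokens": 4096}'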


michaelroyzen commented Sep 13, 2023

We have this same issue and we're only trying to generate 1024 tokens. It's extremely frustrating. @WoosukKwon


c3-avidmych commented Sep 13, 2023

I am seeing this error when running:

  • vllm v0.1.7
  • A100 40Gb
  • model: codellama/CodeLlama-13b-Instruct-hf
  • --max-num-batched-tokens 9000 or larger
  • prompt size starting around 8500

After I get this error the first time, it throws the same error on small prompts as well, until I restart.
So I am forced to set --max-num-batched-tokens to 8129.

Any ideas how to work around this error?
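
For what it's worth, the workaround described above amounts to starting the server with a reduced batched-token budget, roughly like this sketch (model path, port, and the 8192 value are placeholders chosen to stay below the ~9000 setting that triggers the error here):

# Illustrative launch with --max-num-batched-tokens held below the point
# where the crash starts appearing; model, port, and the exact value are
# placeholders.
python3 -m vllm.entrypoints.openai.api_server \
    --model codellama/CodeLlama-13b-Instruct-hf \
    --max-num-batched-tokens 8192 \
    --port 8005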

michaelroyzen commented

Any updates @WoosukKwon? This bug is causing us problems in production.


Cyberes commented Sep 27, 2023

I am encountering a similar issue on an A100 80G and I believe it has something to do with --max-num-batched-tokens.

The stack trace is a bit different:

INFO 09-27 23:24:07 llm_engine.py:72] Initializing an LLM engine with config: model='/storage/vllm/models/Xwin-LM-70B-V0.1-AWQ', tokenizer='/storage/vllm/models/Xwin-LM-70B-V0.1-AWQ', tokenizer_mode=auto, revision=None, trust_remote_code=False, dtype=torch.float16, download_dir=None, load_format=auto, tensor_parallel_size=1, quantization=awq, seed=0)
Traceback (most recent call last):
  File "/local-llm-server/other/vllm/vllm_api_server.py", line 103, in <module>
    engine = AsyncLLMEngine.from_engine_args(engine_args)
  File "/venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 486, in from_engine_args
    engine = cls(engine_args.worker_use_ray,
  File "/venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 270, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 306, in _init_engine
    return engine_class(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 108, in __init__
    self._init_cache()
  File "/venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 188, in _init_cache
    num_blocks = self._run_workers(
  File "/venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 688, in _run_workers
    output = executor(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 108, in profile_num_available_blocks
    self.model(
  File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 293, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 253, in forward
    hidden_states = layer(
  File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 200, in forward
    hidden_states = self.self_attn(
  File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 151, in forward
    attn_output = self.attn(positions, q, k, v, k_cache, v_cache,
  File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/layers/attention.py", line 330, in forward
    return super().forward(
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/layers/attention.py", line 205, in forward
    self.multi_query_kv_attention(
  File "/venv/lib/python3.10/site-packages/vllm/model_executor/layers/attention.py", line 109, in multi_query_kv_attention
    key = torch.repeat_interleave(key, self.num_queries_per_kv, dim=1)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

A --max-num-batched-tokens value higher than 5120 seems to cause this exception. If I remember correctly, I hit a similar issue on an A6000, but there max-num-batched-tokens could be set above 8000. I don't think I've ever encountered this issue on my A4000, and IIRC I had it set to around 9999 there.
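
As the traceback itself suggests, re-running with CUDA_LAUNCH_BLOCKING=1 makes CUDA report the illegal access synchronously at the failing call, which gives a more accurate stack trace. A minimal sketch, assuming the server is launched directly with the vLLM entrypoint (model path and flag values are placeholders taken from the config line above):

# Illustrative: report CUDA errors synchronously so the stack trace points
# at the failing kernel launch; model path and flag values are placeholders.
export CUDA_LAUNCH_BLOCKING=1
python3 -m vllm.entrypoints.openai.api_server \
    --model /storage/vllm/models/Xwin-LM-70B-V0.1-AWQ \
    --quantization awq \
    --max-num-batched-tokens 5120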

Yard1 (Collaborator) commented Oct 2, 2023

I believe this should have been fixed in the latest 0.2.0 release.

RanchiZhao commented

Same bug here.

robcaulk (Contributor) commented

Ran into this problem in 0.2.5 on an A4500 card.

WoosukKwon (Collaborator) commented

@robcaulk Could you share a reproducible script? Thanks.

hmellor closed this as completed Mar 25, 2024
TangJiakai commented

This still happens in version 0.6.1.post2.
