[BUG] <title> Error when starting the Qanything-api service #478

Open · 2 tasks done
GOOD-N-LCM opened this issue Aug 21, 2024 · 0 comments
Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched the FAQ

Current Behavior

I start the service by running bash scripts/run_for_openai_api_with_cpu_in_Linux_or_WSL.sh, but it fails with the following error:

RuntimeError: CUDA error: unspecified launch failure
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
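
The error text itself names the first debugging step: re-running with synchronous kernel launches makes the traceback stop at the kernel that actually failed instead of a later API call. A minimal sketch (not from the original report), reusing the launch command above:

# Force synchronous CUDA launches so the stack trace points at the real failing kernel.
CUDA_LAUNCH_BLOCKING=1 bash scripts/run_for_openai_api_with_cpu_in_Linux_or_WSL.sh

The environment variable is inherited by the Python process the script starts, so no code change is needed.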

Expected Behavior

It has never started successfully...

Environment

- OS: Ubuntu 22.04
- NVIDIA Driver: 535.183.01
- CUDA: 12.2
- docker:
- docker-compose:
- NVIDIA GPU: NVIDIA GeForce RTX 2080 Ti
- NVIDIA GPU Memory: 22GB
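
Before digging into QAnything itself, a few sanity checks (an added suggestion, assuming the qanything conda environment is active) can rule out a broken driver/PyTorch pairing:

# Driver and GPU visibility as the kernel sees them
nvidia-smi
# Does the installed PyTorch CUDA build actually see the device?
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import torch; print(torch.cuda.get_device_name(0))"

If these already fail, the problem sits below QAnything and vLLM.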

QAnything logs

You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565
[2024-08-21 11:19:12 +0800] [1743393] [ERROR] Experienced exception while trying to serve
Traceback (most recent call last):
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/mixins/startup.py", line 958, in serve_single
    worker_serve(monitor_publisher=None, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/worker/serve.py", line 143, in worker_serve
    raise e
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/worker/serve.py", line 117, in worker_serve
    return _serve_http_1(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/server/runners.py", line 223, in _serve_http_1
    loop.run_until_complete(app._server_event("init", "before"))
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/app.py", line 1764, in _server_event
    await self.dispatch(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 208, in dispatch
    return await dispatch
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 183, in _dispatch
    raise e
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 167, in _dispatch
    retval = await maybe_coroutine
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/app.py", line 1315, in _listener
    await maybe_coro
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/qanything_server/sanic_api.py", line 200, in init_local_doc_qa
    local_doc_qa.init_cfg(args=args)
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/core/local_doc_qa.py", line 61, in init_cfg
    self.llm: OpenAICustomLLM = OpenAICustomLLM(args)
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/connector/llm/llm_for_fastchat.py", line 40, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(engine_args)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 500, in from_engine_args
    engine = cls(parallel_config.worker_use_ray,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 273, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 318, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 114, in __init__
    self._init_cache()
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 262, in _init_cache
    num_blocks = self._run_workers(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 795, in _run_workers
    driver_worker_output = getattr(self.driver_worker,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/worker.py", line 96, in profile_num_available_blocks
    self.model_runner.profile_run()
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 494, in profile_run
    self.execute_model(seqs, kv_caches)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 461, in execute_model
    output = self.model.sample(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 295, in sample
    next_tokens = self.sampler(self.lm_head.weight, hidden_states,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 79, in forward
    logits = _apply_top_p_top_k(logits, sampling_tensors.top_ps,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 196, in _apply_top_p_top_k
    probs_sort = logits_sort.softmax(dim=-1)
RuntimeError: CUDA error: unspecified launch failure
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

[2024-08-21 11:19:12 +0800] [1743393] [INFO] Server Stopped
Traceback (most recent call last):
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/qanything_server/sanic_api.py", line 241, in <module>
    app.run(host=args.host, port=args.port, single_process=True, access_log=False)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/mixins/startup.py", line 215, in run
    serve(primary=self)  # type: ignore
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/mixins/startup.py", line 958, in serve_single
    worker_serve(monitor_publisher=None, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/worker/serve.py", line 143, in worker_serve
    raise e
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/worker/serve.py", line 117, in worker_serve
    return _serve_http_1(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/server/runners.py", line 223, in _serve_http_1
    loop.run_until_complete(app._server_event("init", "before"))
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/app.py", line 1764, in _server_event
    await self.dispatch(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 208, in dispatch
    return await dispatch
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 183, in _dispatch
    raise e
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/signals.py", line 167, in _dispatch
    retval = await maybe_coroutine
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/sanic/app.py", line 1315, in _listener
    await maybe_coro
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/qanything_server/sanic_api.py", line 200, in init_local_doc_qa
    local_doc_qa.init_cfg(args=args)
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/core/local_doc_qa.py", line 61, in init_cfg
    self.llm: OpenAICustomLLM = OpenAICustomLLM(args)
  File "/home/ta/space/models/wangyi_QAnything/QAnything/qanything_kernel/connector/llm/llm_for_fastchat.py", line 40, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(engine_args)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 500, in from_engine_args
    engine = cls(parallel_config.worker_use_ray,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 273, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 318, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 114, in __init__
    self._init_cache()
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 262, in _init_cache
    num_blocks = self._run_workers(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 795, in _run_workers
    driver_worker_output = getattr(self.driver_worker,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/worker.py", line 96, in profile_num_available_blocks
    self.model_runner.profile_run()
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 494, in profile_run
    self.execute_model(seqs, kv_caches)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 461, in execute_model
    output = self.model.sample(
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 295, in sample
    next_tokens = self.sampler(self.lm_head.weight, hidden_states,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 79, in forward
    logits = _apply_top_p_top_k(logits, sampling_tensors.top_ps,
  File "/home/ta/anaconda3/envs/qanything/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 196, in _apply_top_p_top_k
    probs_sort = logits_sort.softmax(dim=-1)
RuntimeError: CUDA error: unspecified launch failure
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
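
Both tracebacks end on the same line, probs_sort = logits_sort.softmax(dim=-1) inside vLLM's _apply_top_p_top_k, during the KV-cache profiling run (profile_run), before the server ever answers a request. A hedged isolation test (the tensor shape is a guess, sized roughly like a Llama vocabulary): if the same sort-plus-softmax fails outside vLLM, the fault is in the driver/PyTorch layer rather than in QAnything:

python - <<'EOF'
# Rough stand-in for the op that crashes in vllm/model_executor/layers/sampler.py.
# Shape is illustrative only: (batch, vocab) roughly matching a Llama model.
import torch
logits = torch.randn(8, 32000, device="cuda")
logits_sort, _ = logits.sort(dim=-1, descending=True)
probs_sort = logits_sort.softmax(dim=-1)
print(probs_sort.sum().item())  # only prints if the kernels launch cleanly
EOF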

Steps To Reproduce

1. bash scripts/run_for_3B_in_Linux_or_WSL.sh

Anything else?

No response
