[Bug]: AttributeError: 'CachedChatGLM4Tokenizer' object has no attribute 'vocab' #9187

Closed
LuckLittleBoy opened this issue Oct 9, 2024 · 1 comment · Fixed by #9188
Labels: bug (Something isn't working)

@LuckLittleBoy
Your current environment

The output of `python collect_env.py` was not provided.

Model Input Dumps

No response

🐛 Describe the bug

vLLM version: v0.6.2

Built from Dockerfile.cpu, following this doc: https://docs.vllm.ai/en/v0.6.2/getting_started/cpu-installation.html

glm-4-9b-chat was downloaded from ModelScope.
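
The issue doesn't include the server launch command or client code, but the traceback below goes through `chat_completion_stream_generator` and constructs the Hermes tool parser, which implies a streaming request against a server started with tool calling enabled. A hypothetical reproduction with the OpenAI client (the base URL and served model name are assumptions; the sampling parameters mirror the `SamplingParams` in the log below):

```python
# Hypothetical reproduction script; base_url and model name are assumptions,
# since the issue does not include the launch command or client code.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# temperature=0.01 and max_tokens=2000 match the logged SamplingParams.
stream = client.chat.completions.create(
    model="glm-4-9b-chat",
    messages=[{"role": "user", "content": "你是谁"}],  # "Who are you?"
    temperature=0.01,
    max_tokens=2000,
    stream=True,  # the failing code path is the streaming generator
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```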

Debug log from a request to /v1/chat/completions:
```
INFO 10-09 08:57:11 logger.py:36] Received request chat-23a5dc203cc54f79851da3338a215f75: prompt: '[gMASK]<|user|>\n你是谁<|assistant|>', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.01, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=2000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [151331, 151333, 151336, 198, 103408, 99668, 151337], lora_request: None, prompt_adapter_request: None.
INFO:     192.168.10.77:51198 - "POST /v1/chat/completions HTTP/1.1" 200 OK
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 257, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 253, in wrap
    await func()
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 230, in listen_for_disconnect
    message = await receive()
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 555, in receive
    await self.message_event.wait()
  File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7ff694effb80

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
  |     return await self.app(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 113, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 187, in __call__
  |     raise exc
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 165, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
  |     await self.app(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 62, in wrapped_app
  |     raise exc
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 51, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 715, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 735, in app
  |     await route.handle(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
  |     await self.app(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 76, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 62, in wrapped_app
  |     raise exc
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 51, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 74, in app
  |     await response(scope, receive, send)
  |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 250, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 736, in __aexit__
  |     raise BaseExceptionGroup(
  | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 253, in wrap
    |     await func()
    |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 242, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/serving_chat.py", line 287, in chat_completion_stream_generator
    |     tool_parser: Optional[ToolParser] = self.tool_parser(
    |   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py", line 51, in __init__
    |     self.tool_call_start_token_id: int = self.model_tokenizer.vocab[
    | AttributeError: 'CachedChatGLM4Tokenizer' object has no attribute 'vocab'
    +------------------------------------
DEBUG 10-09 08:57:13 client.py:148] Heartbeat successful.
```
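
The last two frames are the key ones: the Hermes tool parser looks up tool-call token IDs through the tokenizer's `.vocab` attribute, which custom remote-code tokenizers such as `CachedChatGLM4Tokenizer` don't necessarily expose, even though they do implement `get_vocab()` from the Hugging Face tokenizer interface. A minimal sketch of the failure mode (the stand-in class and token IDs are hypothetical, not vLLM code or GLM-4's real vocabulary):

```python
# Hypothetical stand-in for a trust_remote_code tokenizer that has no
# `.vocab` attribute; the token IDs below are made up for illustration.
class CustomTokenizer:
    def get_vocab(self) -> dict:
        # get_vocab() is part of the Hugging Face tokenizer interface,
        # so even custom tokenizers are expected to implement it.
        return {"<tool_call>": 12345, "</tool_call>": 12346}


tok = CustomTokenizer()

try:
    tok.vocab["<tool_call>"]  # what hermes_tool_parser.py did
except AttributeError as exc:
    print(exc)  # 'CustomTokenizer' object has no attribute 'vocab'

# The interface-safe lookup works for slow, fast, and remote-code tokenizers:
print(tok.get_vocab()["<tool_call>"])  # 12345
```

Going through `get_vocab()` is the usual fix for this class of AttributeError, since `.vocab` is only a convenience attribute on some tokenizer classes.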

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@DarkLight1337 (Member) commented Oct 9, 2024

Thanks for reporting this! I have opened #9188 to fix the issue.
