
Docker startup error. Some models cannot be used. #8204

Closed
5 tasks done
AAEE86 opened this issue Sep 10, 2024 · 3 comments
Labels
🐞 bug Something isn't working 🤔 cant-reproduce We can not reproduce it or the information is limited

Comments

AAEE86 (Contributor) commented Sep 10, 2024

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English; otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.8.0-beta1

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Running migrations

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

INFO:matplotlib.font_manager:generated new fontManager

Preparing database migration...

Start database migration.

INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.

INFO  [alembic.runtime.migration] Will assume transactional DDL.

Database migration successful!

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

[2024-09-10 07:43:40 +0000] [1] [INFO] Starting gunicorn 22.0.0

[2024-09-10 07:43:40 +0000] [1] [INFO] Listening at: http://0.0.0.0:5001 (1)

[2024-09-10 07:43:40 +0000] [1] [INFO] Using worker: gevent

[2024-09-10 07:43:40 +0000] [30] [INFO] Booting worker with pid: 30

2024-09-10 07:44:47,371.371 INFO [Thread-3 (_generate_worker)] [_client.py:1026] - HTTP Request: POST https://open.bigmodel.cn/api/paas/v4/chat/completions "HTTP/1.1 200 OK"

Building prefix dict from the default dictionary ...

2024-09-10 07:44:49,381.381 DEBUG [Thread-6 (_retriever)] [__init__.py:113] - Building prefix dict from the default dictionary ...

Dumping model to file cache /tmp/jieba.cache

2024-09-10 07:44:49,941.941 DEBUG [Thread-6 (_retriever)] [__init__.py:146] - Dumping model to file cache /tmp/jieba.cache

Loading model cost 0.620 seconds.

2024-09-10 07:44:50,001.001 DEBUG [Thread-6 (_retriever)] [__init__.py:164] - Loading model cost 0.620 seconds.

Prefix dict has been built successfully.

2024-09-10 07:44:50,001.001 DEBUG [Thread-6 (_retriever)] [__init__.py:166] - Prefix dict has been built successfully.

Exception in thread Thread-4 (_retriever):

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner

    self.run()

  File "/usr/local/lib/python3.10/threading.py", line 953, in run

    self._target(*self._args, **self._kwargs)

  File "/app/api/core/rag/retrieval/dataset_retrieval.py", line 432, in _retriever

    documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],

  File "/app/api/core/rag/datasource/retrieval_service.py", line 90, in retrieve

    raise Exception(exception_message)

Exception: [xinference] Error: HTTPConnectionPool(host='192.168.1.9', port=9997): Max retries exceeded with url: /v1/rerank (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f207617ba30>: Failed to establish a new connection: [Errno 111] Connection refused'))
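The connection-refused error above means the api container could not open a TCP connection to the xinference host at all; the rerank request never left the socket layer. A minimal sketch (hypothetical helper; the host and port are taken from the error message and should be adjusted for your deployment) to verify reachability from inside the container:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

# Host/port copied from the error message; adjust for your deployment.
print(port_reachable("192.168.1.9", 9997))
```

If this prints False when run inside the container, the problem is network-level (e.g. the xinference server binding only to 127.0.0.1, or Docker network isolation), not Dify itself.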

2024-09-10 07:44:50,260.260 INFO [Thread-3 (_generate_worker)] [_client.py:1026] - HTTP Request: POST https://api.siliconflow.cn/v1/rerank "HTTP/1.1 200 OK"

/app/api/core/workflow/nodes/llm/llm_node.py:378: DeprecationWarning: This method is deprecated, use `get` instead.

  variable_value = variable_pool.get_any(variable_selector.value_selector)

/app/api/core/workflow/nodes/llm/llm_node.py:415: DeprecationWarning: This method is deprecated, use `get` instead.

  context_value = variable_pool.get_any(node_data.context.variable_selector)

/app/api/core/workflow/nodes/llm/llm_node.py:557: DeprecationWarning: This method is deprecated, use `get` instead.

  conversation_id = variable_pool.get_any(['sys', SystemVariableKey.CONVERSATION_ID.value])

/app/api/core/workflow/nodes/llm/llm_node.py:114: DeprecationWarning: This method is deprecated, use `get` instead.

  query=variable_pool.get_any(['sys', SystemVariableKey.QUERY.value])

/app/api/core/workflow/utils/condition/processor.py:18: DeprecationWarning: This method is deprecated, use `get` instead.

  actual_value = variable_pool.get_any(

/app/api/.venv/lib/python3.10/site-packages/pydantic/main.py:1282: PydanticDeprecatedSince20: The `copy` method is deprecated; use `model_copy` instead. See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/

  warnings.warn(

2024-09-10 07:47:20,198.198 INFO [Thread-10 (_generate_worker)] [_client.py:1026] - HTTP Request: POST https://open.bigmodel.cn/api/paas/v4/chat/completions "HTTP/1.1 200 OK"

/app/api/.venv/lib/python3.10/site-packages/pydantic/main.py:1358: PydanticDeprecatedSince20: The `validate` method is deprecated; use `model_validate` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/

  warnings.warn(

/app/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py:46: DeprecationWarning: This method is deprecated, use `get` instead.

  variable = self.graph_runtime_state.variable_pool.get_any(node_data.query_variable_selector)

Exception in thread Thread-12 (_retriever):

Traceback (most recent call last):

  File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner

    self.run()

  File "/usr/local/lib/python3.10/threading.py", line 953, in run

    self._target(*self._args, **self._kwargs)

  File "/app/api/core/rag/retrieval/dataset_retrieval.py", line 432, in _retriever

    documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],

  File "/app/api/core/rag/datasource/retrieval_service.py", line 90, in retrieve

    raise Exception(exception_message)

Exception: [xinference] Error: HTTPConnectionPool(host='192.168.1.9', port=9997): Max retries exceeded with url: /v1/rerank (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2075fd0cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))

2024-09-10 07:47:22,103.103 INFO [Thread-10 (_generate_worker)] [_client.py:1026] - HTTP Request: POST https://api.siliconflow.cn/v1/rerank "HTTP/1.1 200 OK"

2024-09-10 07:49:09,575.575 ERROR [Thread-18 (_generate_worker)] [app_generator.py:232] - Unknown Error when generating

Traceback (most recent call last):

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request

    self._validate_conn(conn)

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1095, in _validate_conn

    conn.connect()

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connection.py", line 652, in connect

    sock_and_verified = _ssl_wrap_socket_and_match_hostname(

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connection.py", line 805, in _ssl_wrap_socket_and_match_hostname

    ssl_sock = ssl_wrap_socket(

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 465, in ssl_wrap_socket

    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 509, in _ssl_wrap_socket_impl

    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)

  File "/app/api/.venv/lib/python3.10/site-packages/gevent/ssl.py", line 121, in wrap_socket

    return self.sslsocket_class(

  File "/app/api/.venv/lib/python3.10/site-packages/gevent/ssl.py", line 319, in __init__

    raise x

  File "/app/api/.venv/lib/python3.10/site-packages/gevent/ssl.py", line 315, in __init__

    self.do_handshake()

  File "/app/api/.venv/lib/python3.10/site-packages/gevent/ssl.py", line 673, in do_handshake

    self._sslobj.do_handshake()

ssl.SSLEOFError: [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)



During handling of the above exception, another exception occurred:



Traceback (most recent call last):

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 789, in urlopen

    response = self._make_request(

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 490, in _make_request

    raise new_e

urllib3.exceptions.SSLError: [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)



The above exception was the direct cause of the following exception:



Traceback (most recent call last):

  File "/app/api/.venv/lib/python3.10/site-packages/requests/adapters.py", line 486, in send

    resp = conn.urlopen(

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 843, in urlopen

    retries = retries.increment(

  File "/app/api/.venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 519, in increment

    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))



During handling of the above exception, another exception occurred:



Traceback (most recent call last):

  File "/app/api/core/app/apps/chat/app_generator.py", line 211, in _generate_worker

    runner.run(

  File "/app/api/core/app/apps/chat/app_runner.py", line 55, in run

    self.get_pre_calculate_rest_tokens(

  File "/app/api/core/app/apps/base_app_runner.py", line 81, in get_pre_calculate_rest_tokens

    prompt_tokens = model_instance.get_llm_num_tokens(

  File "/app/api/core/model_manager.py", line 151, in get_llm_num_tokens

    return self._round_robin_invoke(

  File "/app/api/core/model_manager.py", line 303, in _round_robin_invoke

    return function(*args, **kwargs)

  File "/app/api/core/model_runtime/model_providers/openai/llm/llm.py", line 244, in get_num_tokens

    return self._num_tokens_from_messages(base_model, prompt_messages, tools)

  File "/app/api/core/model_runtime/model_providers/deepseek/llm/llm.py", line 57, in _num_tokens_from_messages

    encoding = tiktoken.get_encoding("cl100k_base")

  File "/app/api/.venv/lib/python3.10/site-packages/tiktoken/registry.py", line 73, in get_encoding

    enc = Encoding(**constructor())

  File "/app/api/.venv/lib/python3.10/site-packages/tiktoken_ext/openai_public.py", line 72, in cl100k_base

    mergeable_ranks = load_tiktoken_bpe(

  File "/app/api/.venv/lib/python3.10/site-packages/tiktoken/load.py", line 147, in load_tiktoken_bpe

    contents = read_file_cached(tiktoken_bpe_file, expected_hash)

  File "/app/api/.venv/lib/python3.10/site-packages/tiktoken/load.py", line 64, in read_file_cached

    contents = read_file(blobpath)

  File "/app/api/.venv/lib/python3.10/site-packages/tiktoken/load.py", line 25, in read_file

    resp = requests.get(blobpath)

  File "/app/api/.venv/lib/python3.10/site-packages/requests/api.py", line 73, in get

    return request("get", url, params=params, **kwargs)

  File "/app/api/.venv/lib/python3.10/site-packages/requests/api.py", line 59, in request

    return session.request(method=method, url=url, **kwargs)

  File "/app/api/.venv/lib/python3.10/site-packages/requests/sessions.py", line 589, in request

    resp = self.send(prep, **send_kwargs)

  File "/app/api/.venv/lib/python3.10/site-packages/requests/sessions.py", line 703, in send

    r = adapter.send(request, **kwargs)

  File "/app/api/.venv/lib/python3.10/site-packages/requests/adapters.py", line 517, in send

    raise SSLError(e, request=request)

requests.exceptions.SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))

2024-09-10 07:49:09,596.596 ERROR [Dummy-19] [base_app_generate_response_converter.py:125] - HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
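The tiktoken failures above occur because `get_encoding("cl100k_base")` downloads the BPE file from openaipublic.blob.core.windows.net on first use, and that download dies during the TLS handshake. One workaround is to warm tiktoken's cache while network access is available and point the container at it via the `TIKTOKEN_CACHE_DIR` environment variable tiktoken honors; the path below is an assumption, not a Dify default:

```python
import os

# Hypothetical cache path; any writable directory baked into the image works.
os.environ["TIKTOKEN_CACHE_DIR"] = "/app/api/.tiktoken_cache"

# At image build time, while network access is available, warm the cache once:
#   import tiktoken
#   tiktoken.get_encoding("cl100k_base")
# At runtime, tiktoken then reads the cached BPE file instead of fetching it,
# so token counting no longer depends on reaching openaipublic.blob.core.windows.net.
```

Set the environment variable before tiktoken is first imported (e.g. in the Dockerfile or docker-compose environment), otherwise the default cache location is used.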

2024-09-10 07:49:59,488.488 ERROR [Thread-20 (_generate_worker)] [app_generator.py:232] - Unknown Error when generating

(The traceback and the follow-up base_app_generate_response_converter.py error are identical to the 07:49:09 occurrence above: the same cl100k_base.tiktoken download fails with ssl.SSLEOFError.)

[2024-09-10 07:59:55 +0000] [1] [INFO] Handling signal: term

Running migrations

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

Preparing database migration...

Database migration skipped

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

[2024-09-10 08:03:18 +0000] [1] [INFO] Starting gunicorn 22.0.0

[2024-09-10 08:03:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:5001 (1)

[2024-09-10 08:03:18 +0000] [1] [INFO] Using worker: gevent

[2024-09-10 08:03:18 +0000] [30] [INFO] Booting worker with pid: 30

2024-09-10 08:04:47,582.582 ERROR [Thread-3 (_generate_worker)] [app_generator.py:232] - Unknown Error when generating

(The traceback and the follow-up base_app_generate_response_converter.py error are identical to the pre-restart occurrences: the same cl100k_base.tiktoken download fails with ssl.SSLEOFError.)

2024-09-10 08:05:17,687.687 ERROR [Thread-4 (_generate_worker)] [app_generator.py:232] - Unknown Error when generating

(Again identical: the same cl100k_base.tiktoken download fails with ssl.SSLEOFError, followed by the same base_app_generate_response_converter.py error.)

✔️ Expected Behavior

No response

❌ Actual Behavior

No response

@AAEE86 AAEE86 changed the title Docker 启动报错 大量模型无法使用 Docker startup error. A large number of models cannot be used. Sep 10, 2024
@AAEE86 AAEE86 changed the title Docker startup error. A large number of models cannot be used. Docker startup error. Some models cannot be used. Sep 10, 2024
@dosubot dosubot bot added the 🐞 bug Something isn't working label Sep 10, 2024

dosubot bot commented Sep 10, 2024

I found some similar issues and discussions that might help you resolve the errors you're encountering:

  1. SSL Issues:

  2. Connection Refused Errors:

  3. Missing Models:

    • Issue: Docker showing errors related to missing models (Closed)
      • Solution: Ensure that PyTorch, TensorFlow >= 2.0, or Flax are installed and available in your Docker environment. The error "None of PyTorch, TensorFlow >= 2.0, or Flax have been found" indicates that these dependencies are missing [3].

For SSL certificate verification in Docker, Dify uses a certbot container to manage SSL certificates. Ensure that the SSL certificates are correctly configured and available in the expected locations. Refer to the docker/.env.example file for the default paths and configurations [4][5][6].

By addressing these issues, you should be able to resolve the errors related to missing models and SSL issues in your Docker setup.

To continue talking to Dosu, mention @dosu.

crazywoola (Member) commented Sep 10, 2024

It seems to be a network issue.
We are about to release v0.8.0 this week, so you could try again then.

@crazywoola crazywoola added the 🤔 cant-reproduce We can not reproduce it or the information is limited label Sep 10, 2024
AAEE86 (Contributor, Author) commented Sep 10, 2024

It seems to be a network issue. We are about to release v0.8.0 this week, so you could try again then.

It has been released. I rebuilt with the official v0.8.0 release, and the log no longer reports errors.

@AAEE86 AAEE86 closed this as completed Sep 10, 2024