This error indicates that an unterminated string was encountered while parsing a JSON payload, which usually happens when the response data is incomplete or improperly formatted.
Error Log
C:\Users\unsia>interpreter --model ollama/qwen2.5:3b
▌ Model set to ollama/qwen2.5:3b
Loading qwen2.5:3b...
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 560, in start_terminal_interface
validate_llm_settings(
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 109, in validate_llm_settings
interpreter.llm.load()
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
raise e
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
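The failing parse can be reproduced in isolation. A minimal sketch, assuming the stream was cut off mid-object (the truncated payload below is hypothetical, not the actual model output):

```python
import json

# Hypothetical truncated payload: the response stream ended in the
# middle of a JSON object, e.g. right after '{"name'.
truncated = '{"name'

try:
    json.loads(truncated)
except json.JSONDecodeError as e:
    # Same message as in the traceback above:
    # Unterminated string starting at: line 1 column 2 (char 1)
    print(str(e))
```

The column/char offsets match the traceback because the unterminated string begins immediately after the opening brace.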
Analysis Process
Call Stack: The error occurs in the file litellm/llms/ollama.py when attempting to parse the model's response using json.loads(response_content).
Potential Causes:
The data returned by the model may not be in the expected format.
Network issues, server-side problems, or a non-compliant response format from the model can lead to empty or partial responses.
Suggested Solutions
Check the Model's Response: Ensure that the API response from the model is complete and properly formatted as JSON. Debugging can be facilitated by printing out response_content.
Catch Errors and Print More Information: Before calling json.loads(), add checks to ensure that response_content is indeed a valid JSON string.
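A minimal sketch of the suggested guard (function and variable names here are hypothetical; the actual parsing code in litellm/llms/ollama.py may differ):

```python
import json

def parse_function_call(response_content: str) -> dict:
    """Validate response_content before handing it to json.loads()."""
    if not response_content or not response_content.strip():
        raise ValueError("Model returned an empty response")
    try:
        return json.loads(response_content)
    except json.JSONDecodeError as e:
        # Include the raw payload in the error so truncated or
        # malformed responses are visible during debugging.
        raise ValueError(
            f"Model response is not valid JSON ({e.msg} at char {e.pos}): "
            f"{response_content!r}"
        ) from e
```

Surfacing the raw response_content in the raised error (or simply printing it first) makes it immediately obvious whether the stream was truncated or the model emitted non-JSON text.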
Bug Description
When executing the command interpreter --model ollama/qwen2.5:3b, the run fails with the error shown above. The error indicates an unterminated string while parsing a JSON payload, which usually happens when the response data is incomplete or improperly formatted.
Steps to Reproduce
To be filled with specific steps to reproduce this issue.
Expected Behavior
To be filled with the expected behavior from the user's perspective.
Environment Information