run interpreter --model ollama/qwen2.5:3b error #1548

Closed

unsiao opened this issue Dec 2, 2024 · 4 comments

Comments


unsiao commented Dec 2, 2024

Bug Description

When executing the command interpreter --model ollama/qwen2.5:3b, an error occurs with the specific error message:

json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

This indicates that json.loads hit an unterminated string while parsing the model's response, which usually means the response data was truncated or improperly formatted.
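For context, the exception itself is easy to reproduce in isolation. The sketch below uses a made-up fragment to show how a payload cut off mid-string yields exactly this message:

import json

# Hypothetical truncated payload: the opening brace parses, but the string
# starting at column 2 (char 1) never gets its closing quote -- the same
# condition a cut-off streaming chunk produces.
truncated_chunk = '{"name'
try:
    json.loads(truncated_chunk)
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column 2 (char 1)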

Error Log


C:\Users\unsia>interpreter --model ollama/qwen2.5:3b

▌ Model set to ollama/qwen2.5:3b

Loading qwen2.5:3b...

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 560, in start_terminal_interface
    validate_llm_settings(
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 109, in validate_llm_settings
    interpreter.llm.load()
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
    raise e
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
    function_call = json.loads(response_content)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)


Analysis Process

  • Call Stack: The error occurs in the file litellm/llms/ollama.py when attempting to parse the model's response using json.loads(response_content).
  • Potential Causes:
    • The data returned by the model may not be in the format the parser expects.
    • Network issues, server-side problems, or a non-compliant response format could leave the model's response empty or partial (one way to check this is sketched below).
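One way to check the last point is to bypass Open Interpreter entirely and watch the raw stream coming back from Ollama. Below is a minimal sketch, assuming a local Ollama server on its default port 11434 and the requests package; it prints every raw chunk before parsing, so a truncated or non-JSON line shows up immediately:

import json

import requests  # assumed available; pip install requests

# Stream a trivial prompt straight to the local Ollama server and print
# each raw NDJSON line before parsing it, to spot truncated or
# malformed chunks.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:3b", "prompt": "ping", "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    print("raw chunk:", line)
    try:
        json.loads(line)
    except json.JSONDecodeError as e:
        print("malformed chunk:", e)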

Suggested Solutions

  1. Check the Model's Response: Ensure that the API response from the model is complete and properly formatted JSON; printing response_content makes this easy to debug.
  2. Catch Errors and Print More Information: Before calling json.loads(), add checks to ensure that response_content is indeed a valid JSON string.

Example Code:

import json

if response_content:
    try:
        parsed_data = json.loads(response_content)
    except json.JSONDecodeError as e:
        # Surface both the exception and the raw payload so the
        # malformed or truncated chunk can be inspected directly.
        print(f"JSON Decode Error: {e}")
        print(f"Response content: {response_content!r}")
else:
    print("Empty response content")

Steps to Reproduce

  1. Install Open Interpreter 0.4.3 Developer Preview (Python 3.11, Windows 11).
  2. Pull the model in Ollama (ollama pull qwen2.5:3b) and make sure the Ollama server is running.
  3. Run interpreter --model ollama/qwen2.5:3b. The crash occurs while the model is loading.

Expected Behavior

The command should load the model and start an interactive session instead of crashing with a JSONDecodeError.

Environment Information

  • Open Interpreter Version: Open Interpreter 0.4.3 Developer Preview
  • Python Version: Python 3.11.0
  • Operating System: Windows 11
unsiao changed the title from "An error occurs when running interpreter --model ollama/qwen2.5:3b" (originally in Chinese) to "run interpreter --model ollama/qwen2.5:3b error" on Dec 2, 2024
Notnaton (Collaborator) commented Dec 2, 2024

You need to disable function calling or tool calling

unsiao (Author) commented Dec 3, 2024

You need to disable function calling or tool calling

Sorry, I didn't understand. How do I change it? How do I turn off function calling?

Notnaton (Collaborator) commented Dec 3, 2024

interpreter --no-llm_supports_functions
Or
--no-tool-calling

You can run interpreter --help to see available settings
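For anyone using the Python API instead of the CLI, the equivalent would presumably look like the sketch below; the supports_functions attribute name is inferred from the --no-llm_supports_functions flag and may differ between versions, so check interpreter --help or the docs:

from interpreter import interpreter

# Sketch assuming the Python API mirrors the CLI flag above; the
# attribute name is inferred from --no-llm_supports_functions and
# may differ between versions.
interpreter.llm.model = "ollama/qwen2.5:3b"
interpreter.llm.supports_functions = False  # disable function/tool calling
interpreter.chat()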

unsiao (Author) commented Dec 5, 2024

thanks!
[screenshot omitted]

done!

unsiao closed this as completed Dec 5, 2024