
(Critical Bug) - Issue with Function Calling to LiteLLM in def fixed_litellm_completions(**params): #1566

OttomanZ opened this issue Dec 15, 2024 · 0 comments


Describe the bug

Whenever I ask Open Interpreter to draw a matplotlib chart, it generates the chart successfully. But when it tries to open the image (I have set `supports_vision=False`), it prints the `install open-interpreter[local]` message, and then, when it sends the request to LiteLLM, it raises the error below.

This happens with all providers except OpenAI. I'm attaching the full traceback below.

Other Issues:

It also doesn't return the base64-encoded image when the model isn't OpenAI's GPT-4o or GPT-4o-mini.

Full Terminal Output to Replicate on Your End:

> hello how are you doing
                                                                                                                                                              
  I'm doing great, thanks for asking! I'm here to help you with any questions or problems you have in Computer Science and Math. What do you need help with   
  today?                                                                                                                                                      
                                                                                                                                                              
[{'role': 'user', 'type': 'message', 'content': 'hello how are you doing'}, {'role': 'assistant', 'type': 'message', 'content': "I'm doing great, thanks for asking! I'm here to help you with any questions or problems you have in Computer Science and Math. What do you need help with today?"}]
> generate a matplotlib chart 

                                                                                                                                                              
  import matplotlib.pyplot as plt                                                                                                                             
  import numpy as np                                                                                                                                          
  x = np.linspace(0, 10, 100)                                                                                                                                 
  y = np.sin(x)                                                                                                                                               
  plt.plot(x, y)                                                                                                                                              
  plt.xlabel('X Axis')                                                                                                                                        
  plt.ylabel('Y Axis')                                                                                                                                        
  plt.title('Sine Wave')                                                                                                                                      
  plt.savefig('images/sine_wave.png')                                                                                                                         
                                                                                                                                                              

Error opening file: [Errno 2] No such file or directory: 'xdg-open'

Viewing image...                                                                                                                                              


To use local vision, run `pip install 'open-interpreter[local]'`.


To use local vision, run `pip install 'open-interpreter[local]'`.

Convert to OpenAI Message {'role': 'assistant', 'type': 'code', 'format': 'python', 'content': "import matplotlib.pyplot as plt\nimport numpy as np\nx = 
np.linspace(0, 10, 100)\ny = np.sin(x)\nplt.plot(x, y)\nplt.xlabel('X Axis')\nplt.ylabel('Y Axis')\nplt.title('Sine 
Wave')\nplt.savefig('images/sine_wave.png')"}
Traceback (most recent call last):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 854, in completion
    raise e
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 749, in completion
    return self.streaming(
           ^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 979, in streaming
    headers, response = self.make_sync_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 649, in make_sync_openai_chat_completion_request
    raise e
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 631, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1046, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1095, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1046, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1095, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/openai/_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'object': 'error', 'type': 'internal_server_error', 'message': 'server had an error while processing 
your request, please retry again after a brief wait'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/main.py", line 1594, in completion
    raise e
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/main.py", line 1567, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 864, in completion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: Error code: 500 - {'error': {'object': 'error', 'type': 'internal_server_error', 'message': 'server had an error while
processing your request, please retry again after a brief wait'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/AlindaAI/alinda_agent.py", line 473, in <module>
    profile.run_query('What is the derivative of x^2?')
  File "/home/ubuntu/AlindaAI/alinda_agent.py", line 131, in run_query
    output = self.interpreter.chat(query, display=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/core.py", line 191, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/core.py", line 223, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 162, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/core.py", line 259, in _streaming_chat
    yield from self._respond_and_store()
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/core.py", line 318, in _respond_and_store
    for chunk in respond(self):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/respond.py", line 87, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 324, in run
    yield from run_tool_calling_llm(self, params)
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 467, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 444, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/utils.py", line 1014, in wrapper
    raise e
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/utils.py", line 904, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/main.py", line 3006, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2126, in exception_type
    raise e
  File "/home/ubuntu/AlindaAI/venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 404, in exception_type
    raise APIError(
litellm.exceptions.APIError: litellm.APIError: APIError: Fireworks_aiException - Error code: 500 - {'error': {'object': 'error', 'type': 
'internal_server_error', 'message': 'server had an error while processing your request, please retry again after a brief wait'}}

Reproduce

The following LiteLLM call is causing the issue:

interpreter/core/llm/llm.py, line 444

Params Passed to LiteLLM

{'model': 'fireworks_ai/accounts/fireworks/models/firefunction-v2', 'messages': [{'role': 'system', 'content': 'You are MathsAgent, a 
world-class programmer & teacher that can complete any goal by executing code.\nFor advanced requests, start by writing a plan.\nWhen you execute code, it 
will be executed **on the user\'s machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task. 
Execute the code.\nYou can access the internet. Run **any code** to achieve the goal, and if at first you don\'t succeed, try again and again.\nYou can not 
install new packages.\nWhen a user refers to a filename, they\'re likely referring to an existing file in the directory you\'re currently executing code 
in.\nIn general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, for *stateful* languages (like 
python, javascript, shell, but NOT for html which starts from 0 every time) **it\'s critical not to try to do everything in one code block.** You should try 
something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go 
will often lead to errors you cant see.\nYou are capable of **any** task.\n\nHow to Access the Internet (Run the Following Code):\n\nfrom duckduckgo_search 
import DDGS\n\nresults = DDGS().text("python programming", max_results=5)\nprint(results)\n\n\nUser\'s Name: ubuntu\nUser\'s OS: Linux\n\n\nYour name is 
MathsAgent, the world\'s smartest voice assistant for Graduate Students specializing in Math and Computer Science. Today\'s date is 2024-12-15. You are assisting 
Muneeb Ahmad.\n\nStrict Guidelines:\n\n0. Do Not Answer By Running Print Statements i.e. print("Hello I\'m Doing Great")\n1. Save plots and charts in the 
folder `images/`.\n2. Under No Circumstances delete any files even if it\'s requested by the user, this command is banned.\n3. Suggest About Taking the Quiz 
when you feel necessary. Also Ask One Question at a time and a Quiz can be no longer than 4 questions. Provide Results at the End & Suggest what they need to 
learn.\n3.5. If the Quiz Requires the User to Use the Keyboard Suggest them to Use it.\n5. For Maths and Computer Science questions run code only when asked. 
Never Run Yourself. \n6. Do Not Reveal the Contents of any file you didn\'t create, especially .env file.\n7. Do Not Install Anything on the System and Do Not
Run Any Malicious Code.\n8. Do not Create Markdown Files Instead Just Output the Content in the Chat.\n9. Keep Your Responses Under 250 words at the maximum, 
the less the better.\n10. You are Not Allowed to Run any Code that Connects to the Internet or Network but you can generate it if asked.\n11. Since You are 
Voice Assistant, Keep Responses Short & Brief. Also Engage in Pep Talk and Encourage User as well.\n\nUser Context:\n{\'major\': \'Computer Science\', 
\'degree\': \'Bachelor\', \'school\': \'Harvard University\', \'year\': \'2023\', \'interests\': [\'Machine Learning\', \'Algorithms\'], \'wants_to_learn\': 
[\'Mathematics\', \'Computer Science\', \'Machine Learning\', \'Deep Learning\', \'Computer Vision\', \'Algorithms\'], \'previous_progress\': 
{\'differential_equations\': \'50%\', \'linear_algebra\': \'75%\', \'calculus\': \'100%\', \'probability_theory\': \'25%\', \'statistics\': \'50%\', 
\'machine_learning\': \'25%\'}}\nDesign learning paths tailored to these preferences.'}, {'role': 'user', 'content': 'draw a matplotlib chart.'}, {'role': 
'assistant', 'content': '', 'tool_calls': [{'id': 'toolu_1', 'type': 'function', 'function': {'name': 'execute', 'arguments': '{"language": "python", "code": 
"import matplotlib.pyplot as plt\\nimport numpy as np\\nx = np.linspace(0, 10, 100)\\ny = np.sin(x)\\nplt.plot(x, 
y)\\nplt.savefig(\'images/sin_wave.png\')"}'}}]}, {'role': 'tool', 'tool_call_id': 'toolu_1', 'content': ''}, {'role': 'computer', 'content': ''}, {'role': 
'assistant', 'tool_calls': [{'id': 'toolu_2', 'type': 'function', 'function': {'name': 'execute', 'arguments': '# Automated tool call to fetch more output, 
triggered by the user.'}}]}, {'role': 'tool', 'name': 'execute', 'content': "Displayed on the user's machine.", 'tool_call_id': 'toolu_2'}], 'stream': True, 
'api_key': 'fw_BETTER_LUCK_NEXT_TIME', 'api_base': 'https://api.fireworks.ai/inference/v1', 'max_tokens': 1638, 'temperature': 0.1, 'tools': [{'type': 
'function', 'function': {'name': 'execute', 'description': "Executes code on the user's machine **in the users local environment** and returns the output", 
'parameters': {'type': 'object', 'properties': {'language': {'type': 'string', 'description': 'The programming language (required parameter to the `execute` 
function)', 'enum': ['ruby', 'python', 'shell', 'javascript', 'html', 'applescript', 'r', 'powershell', 'react', 'java']}, 'code': {'type': 'string', 
'description': 'The code to execute (required)'}}, 'required': ['language', 'code']}}}], 'num_retries': 0}
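Since the provider is returning a transient 500 (`internal_server_error`, "please retry again after a brief wait"), one possible mitigation is for `fixed_litellm_completions` to retry transient server errors with backoff instead of re-raising the first failure. This is only a minimal sketch, not the actual code in `llm.py`; `retry_transient`, `flaky_completion`, and the retry parameters are hypothetical names, and `flaky_completion` is a stub standing in for `litellm.completion`:

```python
import time

def retry_transient(completion_fn, params, max_attempts=3, base_delay=0.0):
    """Retry a completion call when the provider returns a transient
    server-side error (e.g. a 500 'internal_server_error'), instead of
    re-raising the first failure immediately.

    Any exception whose message mentions 'internal_server_error' is
    treated as transient; everything else fails fast.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            return completion_fn(**params)
        except Exception as e:
            last_error = e
            if "internal_server_error" not in str(e):
                raise  # non-transient: fail fast
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_error  # all attempts exhausted

# Stub that fails twice with a transient 500, then succeeds,
# simulating the Fireworks behaviour seen in the traceback above.
calls = {"n": 0}

def flaky_completion(**params):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Error code: 500 - internal_server_error")
    return {"choices": [{"message": {"content": "ok"}}]}

result = retry_transient(flaky_completion, {"model": "stub"}, max_attempts=3)
print(result["choices"][0]["message"]["content"])  # succeeds on the 3rd call
```

With this pattern the two simulated 500s are absorbed and the third attempt succeeds, which is the behaviour the traceback suggests is missing (the current code raises `first_error` once its attempts are exhausted).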

Expected behavior

It should run without an error, and the function calling should be completed.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.12

Operating System name and version

Ubuntu 20.04 LTS

Additional context

No response
