[BUG] Agents getting stuck in loop since move to LiteLLM #1355
Comments
Thanks @chappers00, will get someone from the team to look into it today
Requesting access to Sonnet 3.5 through Bedrock so I can try to replicate this
I've run into the same issue as well.
Using the various GCP Gemini models. Request:
And response:
Found the bug, pushing a fix
I have not been able to test against Sonnet yet, but I was able to replicate this with another LLM.
Thanks @joaomdmoura, I can confirm it's working now with Bedrock + Anthropic on 0.64.0, appreciate the swift fix
Description
After updating to CrewAI 0.63.6 and changing the agent to the new LiteLLM-style configuration, tasks that were taking around 15-20 seconds now sometimes do not complete, and when they do complete it takes up to 10 calls to the LLM before the agent decides to call a tool.
We use Anthropic Claude 3.5 Sonnet via AWS Bedrock.
The agent was previously able to complete tasks with just a few LLM calls, and now it seems to get stuck in a loop repeatedly calling the LLM.
Running with debug logging shows this body of text repeated in the completion request sent by LiteLLM:
{"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}
Steps to Reproduce
Create a single-agent crew with a single task that has tools attached to it. (See Screenshots/Code snippets for the full reproduction code.)
Expected behavior
The agent should use a tool it has access to in order to complete the task within a reasonable number of calls to the LLM.
Screenshots/Code snippets
import os
from crewai import Agent, Crew, Task
from crewai import LLM
from langchain_community.agent_toolkits import FileManagementToolkit

MODEL_ID = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
DIRECTORY = "."

os.environ["AWS_REGION_NAME"] = "us-east-1"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

agent_llm = LLM(model=MODEL_ID)

tools = FileManagementToolkit(
    root_dir=str(DIRECTORY),
    selected_tools=["read_file", "write_file", "list_directory"],
).get_tools()

agent = Agent(
    role="Example Agent Role",
    goal="Example Agent goal",
    backstory="Example agent backstory",
    memory=True,
    verbose=True,
    allow_delegation=True,
    llm=agent_llm,
)

task = Task(
    description=(
        "Get any available information to help the Developer understand "
        "the business context of the application under test"
    ),
    expected_output=(
        "A concise summary provided to the Developer "
        "of the any relevant documentation including READMEs."
    ),
    tools=tools,
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,
    embedder={
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-embed-text-v2:0",
            "deployment_name": "ec_embeddings_titan_v2",
        },
    },
)

crew.kickoff()
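A quick way to isolate whether the problem lies in the agent prompting rather than the Bedrock connection is to call the same model directly through LiteLLM, outside the crewAI agent loop (a rough sketch, assuming the litellm package that crewAI depends on is importable):

import litellm

# Hypothetical sanity check: one plain completion against the same model ID.
response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Reply with the single word OK."}],
)
print(response.choices[0].message.content)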
Operating System
macOS Sonoma
Python Version
3.12
crewAI Version
0.63.5
crewAI Tools Version
0.12.1
Virtual Environment
Venv
Evidence
POST Request Sent from LiteLLM: curl -X POST \ https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-3-5-sonnet-20240620-v1:0/converse \ -H 'Content-Type: *****' -H 'X-Amz-Date: *****' -H 'X-Amz-Security-Token:***********************************************' -H 'Authorization: *** Credential=****/us-east-1/bedrock/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-security-token, Signature=************************************************' -H 'Content-Length: *****' \ -d '{"messages": [{"role": "user", "content": [{"text": "\nCurrent Task: Get any available information to help the Developer understand the business context of the application under test\n\nThis is the expect criteria for your final answer: A concise summary provided to the Developer of the any relevant documentation including READMEs.\nyou MUST return the actual complete content as the final answer, not a summary.\n\n# Useful context: \nHistorical Data:\n- Include more specific details about the application's functionality\n- Provide information on how the application fits into larger business processes\n- Explain any integration points with other systems or services\n- Describe the target users or customers of the application\n- Outline any key business requirements or constraints\n- Provide a more concise summary focusing on key business context\n- Include information about the application's intended users or stakeholders\n- Highlight any specific business problems or needs the application addresses\n- Mention any integration points with other systems or services\n- Include information about the development team or organization behind the application\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}, {"text": "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"}]}], "additionalModelRequestFields": {}, "system": [{"text": "You are Example Agent Role. 
Example agent backstory\nYour personal goal is: Example Agent goal\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: read_file\nTool Description: Read file from disk\nTool Arguments: {'file_path': {'title': 'File Path', 'description': 'name of file', 'type': 'string'}}\nTool Name: write_file\nTool Description: Write file to disk\nTool Arguments: {'file_path': {'title': 'File Path', 'description': 'name of file', 'type': 'string'}, 'text': {'title': 'Text', 'description': 'text to write to file', 'type': 'string'}, 'append': {'title': 'Append', 'description': 'Whether to append to an existing file.', 'default': False, 'type': 'boolean'}}\nTool Name: list_directory\nTool Description: List files and directories in a specified folder\nTool Arguments: {'dir_path': {'title': 'Dir Path', 'description': 'Subdirectory to list.', 'default': '.', 'type': 'string'}}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [read_file, write_file, list_directory], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n"}], "inferenceConfig": {}}'
Possible Solution
It seems the prompt being used with LiteLLM is not eliciting a valid, correctly formatted response from the LLM.
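For comparison, the format instructions in the system prompt above expect the model to reply with either a single tool call or a final answer, so a well-formed reply would look roughly like this (illustrative only, not actual model output):

Thought: I should list the files in the project root to find any relevant documentation.
Action: list_directory
Action Input: {"dir_path": "."}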
Additional context
N/A