Not routing to correct agent in langgraph #24299
Replies: 1 comment
The issue you're encountering might be due to the way the supervisor prompt and routing schema are set up, or to how the LLM is interpreting the worker descriptions when choosing the next worker.
Here is a revised version of your code with added debug statements. The imports have been corrected to use the current package layout (`from langchain import ...` does not export these names); `ChatOpenAI` is used below as a placeholder for however you construct your chat model:

```python
import json

from langchain_core.messages import HumanMessage
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")  # placeholder: use whichever model you load

members = ["Jira_userstory_analyst", "Product Quality Engineer", "General_agent"]
system_prompt = (
    "You are a supervisor tasked with managing the"
    " following workers: {members}. Product Quality Engineer worker has"
    " details about failure cases, serial numbers affected and their issue"
    " classifications. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
options = ["FINISH"] + members

function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "title": "routeSchema",
        "type": "object",
        "properties": {
            "next": {
                "title": "Next",
                "anyOf": [
                    {"enum": options},
                ],
            }
        },
        "required": ["next"],
    },
}

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))

supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)

# Debugging: print the routing decision the LLM made
def debug_supervisor_chain(input_messages):
    # The chain is invoked with a dict matching the prompt's variables, and
    # JsonOutputFunctionsParser already returns a parsed dict, so there is
    # no need for a separate json.loads step.
    result = supervisor_chain.invoke({"messages": input_messages})
    print("Parsed routing decision:", result)
    return result

# Example usage
input_messages = [
    HumanMessage(content="What are serial number affected by issue classification by x in y?")
]
debug_supervisor_chain(input_messages)
```

This code prints the parsed routing decision so you can see exactly which worker the LLM selects for a given question. Additionally, ensure that the system prompt clearly distinguishes the responsibilities of each worker, since the LLM routes purely based on those descriptions. If the issue persists, consider reviewing the known compatibility issues between the versions of the LangChain packages you have installed.
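As a quick sanity check that is independent of LangChain, the parse-and-validate step the router relies on can be sketched with the standard library alone. Here `parse_route` and the payload string are hypothetical stand-ins for the model's function-call `arguments`:

```python
import json

# The same options list used in the routing schema above
OPTIONS = ["FINISH", "Jira_userstory_analyst", "Product Quality Engineer", "General_agent"]

def parse_route(function_call_arguments: str) -> str:
    """Parse the function-call arguments JSON and validate the routing decision."""
    parsed = json.loads(function_call_arguments)
    next_worker = parsed["next"]
    # Guard against the model emitting a value outside the declared enum
    if next_worker not in OPTIONS:
        raise ValueError(f"Model routed to unknown worker: {next_worker!r}")
    return next_worker

# Simulated function-call payload for the serial-number question
print(parse_route('{"next": "Product Quality Engineer"}'))  # Product Quality Engineer
```

Logging the raw arguments this way makes it easy to tell whether the wrong routing comes from the model's choice or from a parsing problem downstream.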
Description
There are three members that we are using in LangGraph: ["Jira_userstory_analyst", "Product Quality Engineer", "General_agent"]
Supervisor prompt: You are a supervisor tasked with managing following workers: Jira_userstory_analyst, Product Quality Engineer, General_agent. Product Quality Engineer worker has details about failure cases, serial numbers affected and their issue classifications. Given the following user request, respond with the worker to act next. Each worker will perform a task and respond with their results and status. When finished, respond with FINISH. System: Given the conversation above, who should act next? Or should we FINISH? Select one of: ['FINISH', 'Jira_userstory_analyst', 'Product Quality Engineer', 'General_agent']
Question prompt: What are serial number affected by issue classification by x in y?
In this scenario, it should go to Product Quality Engineer, but it goes to Jira_userstory_analyst.
Please advise
System Info
openai==1.35.3
langchain==0.2.5
langchain-openai==0.1.9
langchain-community==0.2.5
langchain_experimental==0.0.60
tabulate==0.9.0
snowflake-sqlalchemy==1.5.1
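To cross-check the versions above against LangChain's compatibility notes, a small stdlib-only sketch can report what is actually installed (the `installed_versions` helper is hypothetical, built on `importlib.metadata`):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_versions(packages):
    """Return a mapping of package name -> installed version string (or None)."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

# Packages from the System Info section above
print(installed_versions([
    "langchain", "langchain-openai", "langchain-community", "langchain-experimental",
]))
```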