
Error Code 400. Can ask follow up questions with agent_executor #15050

Closed
2 of 14 tasks
shamussim-ai opened this issue Dec 22, 2023 · 4 comments
Labels
Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@shamussim-ai

System Info

Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
google-search-results 2.4.2
Windows

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

Summary
First, I ask the LangChain agent a question using the code below and get a good response.
result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})

However, when I ask a new follow-up question by changing the query, I get an error.
result = agent_executor({"What other forms of acne treatment are there for male teenagers around the age of 12 years old?"})

BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid.

Code Used

model = AzureChatOpenAI(
    openai_api_version="2023-12-01-preview",
    azure_deployment="35-16k",
    temperature=0
)

# Define which tools the agent can use to answer user queries
SERPAPI_API_KEY='------'
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools= [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    )
]
# This is needed for both the memory and the prompt
memory_key = "history"

from langchain.agents.openai_functions_agent.agent_token_buffer_memory import (
    AgentTokenBufferMemory,
)

memory = AgentTokenBufferMemory(memory_key=memory_key, llm=model, max_token_limit = 13000)

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.prompts import MessagesPlaceholder
from langchain_core.messages import SystemMessage
system_message = SystemMessage(
    content=(
        "You are an expert Doctor. "
        "Use the web search tool to gain insights from web search, summarise the results and provide an answer. "
        "Keep the source links during generation to produce inline citations. "
        "Cite search results using [${{number}}] notation. Only cite the most "
        "relevant results that answer the question accurately. Place these citations at the end "
        "of the sentence or paragraph that references them - do not put them all at the end. If "
        "different results refer to different entities within the same name, write separate "
        "answers for each entity. If you want to cite multiple results for the same sentence, "
        "format it as `[${{number1}}] [${{number2}}]`. However, you should NEVER do this with the "
        "same number - if you want to cite `number1` multiple times for a sentence, only do "
        "`[${{number1}}]` not `[${{number1}}] [${{number1}}]`. "
        "From the retrieved results, generate an answer. "
        "At the end of the answer, show the source for each [${{number}}]. Follow this format: every [${{number}}] notation is followed by the corresponding source link. "
        "You should use bullet points in your answer for readability. Put citations where they apply "
        "rather than putting them all at the end."
    )
)

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)

#The Agent
agent = OpenAIFunctionsAgent(llm=model, tools=tools, prompt=prompt)

#The Agent Executor
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    return_intermediate_steps=True,
)

result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})

Output
Shows good results

Next I run agent_executor with another question and this is where I get a bug.
result = agent_executor({"new question"})
Even if I just repeat the question, i.e. use the initial one, I get this error.
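Worth noting as a side observation (not raised in the thread itself): `{"…"}` in the calls above is a Python *set* literal, not a dict, so each call passes a one-element set rather than a keyed input. A minimal illustration of the difference; the `"input"` key name is the usual default for this agent and is assumed here:

```python
# A set literal vs. the dict payload the chain expects.
question = "What is the recommended dose of doxycycline for acne?"

literal = {question}            # one-element set: no input key at all
payload = {"input": question}   # explicit input key, as the chain expects

assert isinstance(literal, set)
assert isinstance(payload, dict)
```

Calling `agent_executor(payload)` with the dict form removes the ambiguity about which input variable the question binds to.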

Error
---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[15], line 1
----> 1 result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
...
    (...)
    937     stream_cls=stream_cls,
    938 )

BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
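One plausible culprit (an assumption on my part, not confirmed in this thread): `AgentTokenBufferMemory` can store assistant function-call steps whose `content` is `None`, and when those are replayed on the next call, some Azure OpenAI API versions reject the payload with exactly this `'$.messages[N].content' is invalid` 400. A self-contained sketch of a workaround over OpenAI-format message dicts (`sanitize_messages` is a hypothetical helper, not a LangChain API):

```python
def sanitize_messages(messages):
    """Replace None message content with an empty string.

    `messages` is a list of dicts in OpenAI chat format. Assistant
    messages that carry only a function_call often have content=None,
    which some API versions reject with a 400.
    """
    for msg in messages:
        if msg.get("content") is None:
            msg["content"] = ""
    return messages

# Example: the second entry mimics a stored function-call step.
history = [
    {"role": "user", "content": "What is the recommended dose?"},
    {"role": "assistant", "content": None,
     "function_call": {"name": "Search", "arguments": "{}"}},
]
sanitize_messages(history)
# history[1]["content"] is now "" instead of None.
```

Applied to whatever list of message dicts is about to be sent to the endpoint, this keeps the function-call metadata intact while making the `content` field valid.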

Expected behavior

Formatted normal output


dosubot bot commented Dec 22, 2023

🤖

Hey @shamussim-ai! I'm Dosu, a friendly bot here to assist you with your LangChain problems while the human maintainers are busy. I can help you solve bugs, answer your questions, and guide you on becoming a better contributor to our project. Remember, no question is too small or too big for me. Let's get that issue sorted out together!

Based on the error message you provided, it seems like the issue might be related to the format of the input you're passing to the agent_executor function. The error message suggests that the '$.messages[1].content' field is invalid, which indicates that the second message in the conversation (i.e., your follow-up question) might not be formatted correctly.

In LangChain, the agent_executor function doesn't directly take user input. Instead, it uses certain input variables defined as previous_steps, current_step, and agent_scratchpad. If include_task_in_prompt is True, objective is also added to the input variables. These variables are used to format the HUMAN_MESSAGE_TEMPLATE or TASK_PREFIX + HUMAN_MESSAGE_TEMPLATE (if include_task_in_prompt is True), which is used as the prompt for the language model.

Given this, it's possible that the error is occurring because the current_step (i.e., your follow-up question) is not being formatted correctly. To resolve this, you might need to ensure that your follow-up question is correctly formatted as a current_step input variable.

If you're still encountering issues, it would be helpful if you could provide more specific details about how you're formatting your input for the agent_executor function, as well as any modifications you've made to the LangChain scripts. This will help me better understand the context in which the error is occurring and provide a more accurate solution.

For more information, you can refer to the LangChain source code.

I hope this helps! If you have any other questions, feel free to ask.

Best,
Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@shamussim-ai shamussim-ai changed the title Error Code 400. Asking follow up questions with agent_executor Error Code 400. Can ask follow up questions with agent_executor Dec 22, 2023
@dosubot dosubot bot added Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Dec 22, 2023
@cyai
Contributor

cyai commented Dec 23, 2023

@shamussim-ai This seems like a weird error. Could you try using the `invoke` method of the `agent_executor` object?

Here,

result = agent_executor.invoke({"input": "What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})

@shamussim-ai
Author

shamussim-ai commented Dec 25, 2023

Hi @cyai. Appreciate the response.

That helped, but the error still persists. I can ask one follow-up question; however, on the 2nd or 3rd follow-up question I am returned with the same error message. For your reference, please see the trace below.

BadRequestError                           Traceback (most recent call last)
Cell In[30], line 1
----> 1 result = agent_executor.invoke({"input": "What is the management for childhood asthma?"})

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:89, in Chain.invoke(self, input, config, **kwargs)
     82 def invoke(
     83     self,
     84     input: Dict[str, Any],
     85     config: Optional[RunnableConfig] = None,
     86     **kwargs: Any,
     87 ) -> Dict[str, Any]:
     88     config = config or {}
---> 89     return self(
     90         input,
     91         callbacks=config.get("callbacks"),
     92         tags=config.get("tags"),
     93         metadata=config.get("metadata"),
     94         run_name=config.get("run_name"),
     95         **kwargs,
     96     )

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
...
    (...)
    937     stream_cls=stream_cls,
    938 )

BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[5].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Additionally, I get thrown a warning when agent_executor.invoke runs successfully. Any advice on how to deal with it? Where should I add the output_parser?


Invoking: `Cardiology_CPG` with `{'query': 'CPG Management of Dyslipidaemia 2017 guidelines'}`


/anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/llm.py:316: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(
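On the deprecation warning: it asks for the parser to be attached to the chain once at construction time (`output_parser=` on the `LLMChain`) instead of calling `predict_and_parse`. The classes below are a minimal stand-in sketching that pattern, not LangChain's actual API:

```python
# Minimal stand-in for the "attach the parser to the chain" pattern.
class CommaListParser:
    """Parses a comma-separated string into a list of stripped items."""
    def parse(self, text: str) -> list[str]:
        return [part.strip() for part in text.split(",") if part.strip()]

class MiniChain:
    """Toy chain: calls an LLM, then applies the attached parser."""
    def __init__(self, llm, output_parser=None):
        self.llm = llm
        self.output_parser = output_parser

    def predict(self, prompt: str):
        raw = self.llm(prompt)  # llm is any callable str -> str
        return self.output_parser.parse(raw) if self.output_parser else raw

# Usage with a fake LLM standing in for the real model:
chain = MiniChain(llm=lambda p: "aspirin, ibuprofen",
                  output_parser=CommaListParser())
result = chain.predict("List two painkillers")
# result == ["aspirin", "ibuprofen"]
```

With the parser attached this way, there is no separate parse step to call, which is the migration the warning is pointing at.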

@shamussim-ai
Author

Hi @cyai. Wondering if you have any other solution for this. Error still persists.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 4, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Apr 11, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 11, 2024