
BadRequestError: 400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_hSmZB4G8cu3xYSWU6swBuOMo #6621

Closed
arkodeep3404 opened this issue Aug 23, 2024 · 3 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@arkodeep3404

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import dotenv from "dotenv";
dotenv.config();

import { tool } from "@langchain/core/tools";

import { DynamoDBChatMessageHistory } from "@langchain/community/stores/message/dynamodb";
import { ChatOpenAI } from "@langchain/openai";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const imageTool = tool(
  async () => {
    return "image url";
  },
  {
    name: "Get-Image-Tool",
    description:
      "Use this tool if the user asks you to send them an image/picture",
  }
);

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  openAIApiKey: process.env.OPENAI_API_KEY,
});

const tools = [imageTool];

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({ agent, tools });

const conversationalAgentExecutor = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  inputMessagesKey: "input",
  outputMessagesKey: "output",
  historyMessagesKey: "chat_history",
  getMessageHistory: async (sessionId) => {
    return new DynamoDBChatMessageHistory({
      tableName: process.env.AWS_TABLE_NAME,
      partitionKey: process.env.AWS_TABLE_PARTITION_KEY,
      sessionId,
      config: {
        region: process.env.AWS_REGION,
        credentials: {
          accessKeyId: process.env.AWS_ACCESS_KEY_ID,
          secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        },
      },
    });
  },
});

// const res1 = await chainWithHistory.invoke(
//   {
//     input: "Hi! I'm Arkodeep",
//   },
//   { configurable: { sessionId: "test" } }
// );
// console.log(res1);

/*
  "Hello MJDeligan! It's nice to meet you. My name is AI. How may I assist you today?"
*/

const res2 = await conversationalAgentExecutor.invoke(
  { input: [new HumanMessage("send me a pic")] },
  { configurable: { sessionId: "test" } }
);
console.log(res2);

/*
  "You said your name was MJDeligan."
*/

Error Message and Stack Trace (if applicable)

arkodeepchatterjee@Arkodeeps-MacBook-Air chat % node index.js
New LangChain packages are available that more efficiently handle tool calling.

Please upgrade your packages to versions that set message tool calls. e.g., yarn add @langchain/anthropic, yarn add @langchain/openai`, etc.
node:internal/process/esm_loader:40
internalBinding('errors').triggerUncaughtException(
^

BadRequestError: 400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_hSmZB4G8cu3xYSWU6swBuOMo
at APIError.generate (file:///Users/arkodeepchatterjee/Desktop/chat/node_modules/openai/error.mjs:41:20)
at OpenAI.makeStatusError (file:///Users/arkodeepchatterjee/Desktop/chat/node_modules/openai/core.mjs:268:25)
at OpenAI.makeRequest (file:///Users/arkodeepchatterjee/Desktop/chat/node_modules/openai/core.mjs:311:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///Users/arkodeepchatterjee/Desktop/chat/node_modules/@langchain/openai/dist/chat_models.js:1302:29
at async RetryOperation._fn (/Users/arkodeepchatterjee/Desktop/chat/node_modules/p-retry/index.js:50:12) {
status: 400,
headers: {
'access-control-expose-headers': 'X-Request-ID',
'alt-svc': 'h3=":443"; ma=86400',
'cf-cache-status': 'DYNAMIC',
'cf-ray': '8b7e5bc1e97f2961-BOM',
connection: 'keep-alive',
'content-length': '325',
'content-type': 'application/json',
date: 'Fri, 23 Aug 2024 21:57:28 GMT',
'openai-organization': 'raheel-ioccbf',
'openai-processing-ms': '21',
'openai-version': '2020-10-01',
server: 'cloudflare',
'set-cookie': '__cf_bm=xVttIsHFgX4RJcqQIP17W4kjroBwaRb2sp_eZRnTnTU-1724450248-1.0.1.1-SErkhyNCqzUpJ1D1vMzrnjhjzQibfeclp03kei7Vcoy4KakiQ7U5ezHxJxp8vU54HqKYMZT6JxJIk__XqpAAbA; path=/; expires=Fri, 23-Aug-24 22:27:28 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=wcSZXMnokbW7KcXy2h26OdsDxUPE1H6OGLz4zeYNCa8-1724450248524-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
'strict-transport-security': 'max-age=15552000; includeSubDomains; preload',
'x-content-type-options': 'nosniff',
'x-ratelimit-limit-requests': '5000',
'x-ratelimit-limit-tokens': '2000000',
'x-ratelimit-remaining-requests': '4999',
'x-ratelimit-remaining-tokens': '1999765',
'x-ratelimit-reset-requests': '12ms',
'x-ratelimit-reset-tokens': '7ms',
'x-request-id': 'req_fdd5c472dfef8a7839ec5f00aaf4d4bd'
},
request_id: 'req_fdd5c472dfef8a7839ec5f00aaf4d4bd',
error: {
message: "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_hSmZB4G8cu3xYSWU6swBuOMo",
type: 'invalid_request_error',
param: 'messages.[19].role',
code: null
},
code: null,
param: 'messages.[19].role',
type: 'invalid_request_error',
attemptNumber: 1,
retriesLeft: 6
}

Node.js v20.9.0
arkodeepchatterjee@Arkodeeps-MacBook-Air chat %

Description

Message history + tool calling doesn't work together: the intermediate ToolMessage responses never get written to the chat message history, so the replayed history contains an assistant message with tool_calls but no tool messages answering those tool_call_ids, and OpenAI rejects the next request with the 400 above.
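
For illustration, here is a hypothetical reconstruction of the kind of history that triggers this 400 (the tool_call id is taken from the error above; everything else is made up):

import { HumanMessage, AIMessage } from "@langchain/core/messages";

// Roughly what the replayed history contains after a tool-calling turn:
const storedHistory = [
  new HumanMessage("send me a pic"),
  new AIMessage({
    content: "",
    tool_calls: [
      { id: "call_hSmZB4G8cu3xYSWU6swBuOMo", name: "Get-Image-Tool", args: {} },
    ],
  }),
  // No ToolMessage with tool_call_id "call_hSmZB4G8cu3xYSWU6swBuOMo" follows,
  // so OpenAI rejects the next request with the 400 shown above.
];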

System Info

{
  "name": "chat",
  "version": "1.0.0",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.637.0",
    "@langchain/community": "^0.2.31",
    "@langchain/core": "^0.2.28",
    "@langchain/openai": "^0.2.7",
    "dotenv": "^16.4.5",
    "langchain": "^0.2.16",
    "zod": "^3.23.8"
  }
}

@ccurme ccurme transferred this issue from langchain-ai/langchain Aug 23, 2024
@dosubot dosubot bot added the auto:bug (Related to a bug, vulnerability, unexpected error with an existing feature) label Aug 23, 2024

dosubot bot commented Aug 23, 2024

To resolve the "BadRequestError: 400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'" error in your LangChain code, ensure that every tool call made by the assistant is followed by a corresponding tool message that responds to each tool_call_id.

Here are the steps to ensure this:

  1. Ensure Tool Calls are Followed by Tool Messages:

    • Make sure that for every tool call made by the assistant, there is a corresponding tool message that responds to the tool_call_id.
  2. Check the _prepareAndParseToolCall Method:

    • This method is responsible for handling tool calls. Ensure that it correctly parses and includes tool calls in the response message.
  3. Convert Messages Correctly:

    • Ensure that the convertToConverseMessages function correctly handles the tool calls and includes them in the messages.

Here is a relevant snippet from the code to help you understand how to handle tool calls:

const responseMessageWithFunctions = new AIMessage({
  content: chatGenerationContent.split("<function_calls>")[0],
  additional_kwargs: {
    tool_calls: invocations.map((toolInvocation, i) => {
      const calledTool = tools.find(
        (tool) => tool.function.name === toolInvocation.tool_name
      );
      if (calledTool === undefined) {
        throw new Error(
          `Called tool "${toolInvocation.tool_name}" did not match an existing tool.`
        );
      }
      return {
        id: i.toString(),
        type: "function",
        function: {
          name: toolInvocation.tool_name,
          arguments: JSON.stringify(
            fixArrayXMLParameters(
              calledTool.function.parameters as JsonSchema7ObjectType,
              toolInvocation.parameters
            )
          ),
        },
      };
    }),
  },
});

Ensure that every tool_call_id emitted in the tool_calls array of additional_kwargs has a corresponding tool message response in the conversation. If any tool call is missing a response, it will trigger the error you are encountering.
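
For example, a minimal sketch of a well-formed sequence built with the @langchain/core message classes (the tool name and call id here are illustrative):

import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";

const messages = [
  new HumanMessage("send me a pic"),
  new AIMessage({
    content: "",
    tool_calls: [{ id: "call_abc123", name: "Get-Image-Tool", args: {} }],
  }),
  // Every id listed in tool_calls needs a matching ToolMessage before the next model call:
  new ToolMessage({ tool_call_id: "call_abc123", content: "image url" }),
];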

Additionally, the convertToConverseMessages function in the libs/langchain-aws/src/common.ts file is responsible for converting messages to the format expected by Bedrock. This function ensures that tool calls are properly included in the messages:

export function convertToConverseMessages(messages: BaseMessage[]): {
  converseMessages: BedrockMessage[];
  converseSystem: BedrockSystemContentBlock[];
} {
  const converseSystem: BedrockSystemContentBlock[] = messages
    .filter((msg) => msg._getType() === "system")
    .map((msg) => {
      if (typeof msg.content === "string") {
        return { text: msg.content };
      } else if (msg.content.length === 1 && msg.content[0].type === "text") {
        return { text: msg.content[0].text };
      }
      throw new Error(
        "System message content must be either a string, or a content array containing a single text object."
      );
    });
  const converseMessages: BedrockMessage[] = messages
    .filter((msg) => msg._getType() !== "system")
    .map((msg) => {
      if (msg._getType() === "ai") {
        const castMsg = msg as AIMessage;
        const assistantMsg: BedrockMessage = {
          role: "assistant",
          content: [],
        };

        if (castMsg.tool_calls && castMsg.tool_calls.length) {
          assistantMsg.content = castMsg.tool_calls.map((tc) => ({
            toolUse: {
              toolUseId: tc.id,
              name: tc.name,
              input: tc.args,
            },
          }));
        }
        if (typeof castMsg.content === "string" && castMsg.content !== "") {
          assistantMsg.content?.push({
            text: castMsg.content,
          });
        } else if (Array.isArray(castMsg.content)) {
          const contentBlocks: ContentBlock[] = castMsg.content.map((block) => {
            if (block.type === "text" && block.text !== "") {
              return {
                text: block.text,
              };
            } else {
              const blockValues = Object.fromEntries(
                Object.values(block).filter(([key]) => key !== "type")
              );
              throw new Error(
                `Unsupported content block type: ${
                  block.type
                } with content of ${JSON.stringify(blockValues, null, 2)}`
              );
            }
          });
          assistantMsg.content = [
            ...(assistantMsg.content ? assistantMsg.content : []),
            ...contentBlocks,
          ];
        }
        return assistantMsg;
      } else if (msg._getType() === "human" || msg._getType() === "generic") {
        if (typeof msg.content === "string" && msg.content !== "") {
          return {
            role: "user" as const,
            content: [
              {
                text: msg.content,
              },
            ],
          };
        } else if (Array.isArray(msg.content)) {
          const contentBlocks: ContentBlock[] = msg.content.flatMap((block) => {
            if (block.type === "image_url") {
              const base64: string =
                typeof block.image_url === "string"
                  ? block.image_url
                  : block.image_url.url;
              return extractImageInfo(base64);
            } else if (block.type === "text") {
              return {
                text: block.text,
              };
            } else {
              throw new Error(`Unsupported content block type: ${block.type}`);
            }
          });
          return {
            role: "user" as const,
            content: contentBlocks,
          };
        } else {
          throw new Error(
            `Invalid message content: empty string. '${msg._getType()}' must contain non-empty content.`
          );
        }
      } else if (msg._getType() === "tool") {
        const castMsg = msg as ToolMessage;
        if (typeof castMsg.content === "string") {
          return {
            // Tool use messages are always from the user
            role: "user" as const,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      text: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        } else {
          return {
            // Tool use messages are always from the user
            role: "user" as const,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      json: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        }
      } else {
        throw new Error(`Unsupported message type: ${msg._getType()}`);
      }
    });

  // Combine consecutive user tool result messages into a single message
  const combinedConverseMessages = converseMessages.reduce<BedrockMessage[]>(
    (acc, curr) => {
      const lastMessage = acc[acc.length - 1];

      if (
        lastMessage &&
        lastMessage.role === "user" &&
        lastMessage.content?.some((c) => "toolResult" in c) &&
        curr.role === "user" &&
        curr.content?.some((c) => "toolResult" in c)
      ) {
        lastMessage.content = lastMessage.content.concat(curr.content);
      } else {
        acc.push(curr);
      }

      return acc;
    },
    []
  );

  return { converseMessages: combinedConverseMessages, converseSystem };
}

Ensure that your tool calls and responses are correctly formatted and included in the messages to avoid the error [1][2][3][4][5].

To continue talking to Dosu, mention @dosu.

@levivoelz

I have a similar issue using tool calling, RunnableWithMessageHistory, and a prompt template. I'm not using an agent executor, however. I tried passing tool messages back to invoke, but no dice.

    const userNameTool = tool(
      async (data) => data,
      {
        name: "userName",
        description: "called when the user names themselves",
        schema: z.object({
          name: z.string(),
          message: z.string().describe("a message you will say to the user after learning their name"),
        })
      }
    );

    const llm = new ChatOpenAI({
      model: "gpt-4o",
      temperature: 0,
    });

    const prompt = ChatPromptTemplate.fromMessages([
      ['system', systemTemplate],
      ['placeholder', '{chat_history}'],
      ['human', "{input}"],
    ]);

    const llmWithTools = llm.bindTools(tools);
    const runnable = prompt.pipe(llmWithTools);

    export const runnableWithMessageHistory = new RunnableWithMessageHistory({
      runnable,
      getMessageHistory: async (sessionId) => new RedisChatMessageHistory({
        sessionId,
        config: {
          url: process.env.REDIS_URL,
        },
      }),
      inputMessagesKey: "input",
      historyMessagesKey: "chat_history",
    });
    
    const message = {
      agent_name: "Bob",
      caller_name: "Frankie",
      caller_preferences: ['likes milk', 'eats candy'],
      input: "my name is josh, actually",
    };

    const options = {
      configurable: {
        sessionId: 'hanky_panky',
      },
    };

    const stream = await runnableWithMessageHistory.stream(message, options);

    let gathered = undefined;
    const toolMessages = [];

    for await (const chunk of stream) {
      gathered = gathered !== undefined ? concat(gathered, chunk) : chunk;

      if (chunk.content) {
        // send
      }

      if (chunk.response_metadata.finish_reason === "tool_calls") {
        const toolCalls = gathered.tool_calls;

        for (const toolCall of toolCalls) {
          const tool = toolsByName[toolCall.name];
          const toolMessage = await tool.invoke(toolCall);
          // getting this error: "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'."

          toolMessages.push(toolMessage);
        }

        // await runnableWithMessageHistory.invoke(toolMessages, options);
        // -> Error: Missing value for input variable `caller_name`
      }
    }
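
A workaround sketch (untested; it reuses the sessionId and the toolMessages collected above) would be to append the tool results to the same Redis-backed history yourself, so the stored assistant tool-call message is followed by matching tool messages on the next turn:

    // Hypothetical sketch: write the tool results into the session history so the
    // replayed history satisfies the tool_call_id requirement on the next turn.
    const history = new RedisChatMessageHistory({
      sessionId: "hanky_panky",
      config: {
        url: process.env.REDIS_URL,
      },
    });

    for (const toolMessage of toolMessages) {
      await history.addMessage(toolMessage);
    }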


dosubot bot commented Nov 23, 2024

Hi, @arkodeep3404. I'm Dosu, and I'm helping the LangChain JS team manage their backlog. I'm marking this issue as stale.

Issue Summary:

  • You reported a BadRequestError related to missing response messages for tool_call_id.
  • The issue persists despite updating to the latest version and verifying your code.
  • I suggested ensuring every tool call is followed by a corresponding tool message.
  • Another user, levivoelz, reported a similar issue, indicating a potential widespread problem.

Next Steps:

  • Please confirm if this issue is still relevant with the latest version of LangChain JS. If so, feel free to comment to keep the discussion open.
  • If there is no further activity, this issue will be automatically closed in 7 days.

Thank you for your understanding and contribution!

@dosubot dosubot bot added the stale (Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed) label Nov 23, 2024
@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 30, 2024
@dosubot dosubot bot removed the stale (Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed) label Nov 30, 2024