LangGraph Supervisor and Agents (multi-agent) with memory/session #5305
Example Code

// Snippet only — llm, formattedPrompt, toolDef, createAgent, searchTool,
// placeholderTool, graph, and topic are defined elsewhere in the full code.
import { BaseMessage, HumanMessage } from "@langchain/core/messages";
import { JsonOutputToolsParser } from "langchain/output_parsers";

const supervisorChain = formattedPrompt
.pipe(
llm.bind({
tools: [toolDef],
tool_choice: { type: "function", function: { name: "route" } },
})
)
.pipe(new JsonOutputToolsParser())
// select the first one
.pipe((x) => x[0].args);
interface AgentStateChannels {
messages: {
value: (x: BaseMessage[], y: BaseMessage[]) => BaseMessage[];
default: () => BaseMessage[];
};
next: string;
}
// This defines the agent state
const agentStateChannels: AgentStateChannels = {
messages: {
value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
default: () => [],
},
next: "initialValueForNext", // Replace 'initialValueForNext' with your initial value if needed
};
const researcherAgent = await createAgent(
llm,
[searchTool],
"You are a web researcher. You may use the searchTool to search the web for important information, so the blogger in your team can create a blog."
);
const researcherNode = async (state: any, config: any) => {
const result = await researcherAgent.invoke(state, config);
return {
messages: [
new HumanMessage({ content: result.output, name: "researcher" }),
],
};
};
const consolidatorAgent = await createAgent(
llm,
[placeholderTool],
"You are consolidattion agent, using all the content provide from the blogger and image generator, consolidate everything into a final out that can be displayed with headings and paragraphs and images in HTML"
);
const consolidatorNode = async (state: any, config: any) => {
const result = await consolidatorAgent.invoke(state, config);
return {
messages: [
new HumanMessage({ content: result.output, name: "consolidator" }),
],
};
};
const streamResults = graph.stream(
{
messages: [
new HumanMessage({
content: `Write a blog about ${topic}, and provide an image.`,
}),
],
},
{ recursionLimit: 100 }
);
for await (const output of await streamResults) {
if (!output?.__end__) {
console.log(output);
console.log("----");
}
}

Description

I have used the example at https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/agent_supervisor.ipynb to create my own version of a supervisor-and-agents setup, running on an Express server. However, when simultaneous calls are made, everything seems to break. The code provided is a small snippet of the whole thing. Is there a way to make LangGraph session based?

System Info

node: v18.18.0
To handle simultaneous calls and manage sessions effectively in your LangGraph setup running on an Express server, consider isolating each request in its own context with AsyncLocalStorage and persisting each conversation's history in Redis under a per-session identifier.
Here's a simplified code snippet demonstrating how to integrate these concepts into an Express server:

import express from 'express';
import { AsyncLocalStorage } from 'async_hooks';
import { Redis } from "ioredis";
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "@langchain/community/stores/message/ioredis";
import { ConversationChain } from "langchain/chains";

const asyncLocalStorage = new AsyncLocalStorage();
const app = express();
const port = 3000;
const client = new Redis({
// Configuration for Redis with Sentinel support
});
app.get('/query', async (req, res) => {
asyncLocalStorage.run(new Map(), async () => {
const memory = new BufferMemory({
chatHistory: new RedisChatMessageHistory({
        sessionId: String(req.query.sessionId ?? new Date().toISOString()), // reuse a stable id to resume a conversation; a fresh timestamp starts a new session on every request
client,
}),
});
const model = new ChatOpenAI({ /* Model configuration */ });
const chain = new ConversationChain({ llm: model, memory });
const input = req.query.input;
try {
const response = await chain.invoke({ input });
res.json({ output: response.text });
} catch (error) {
res.status(500).send("Error processing your request");
}
});
});
app.listen(port, () => {
console.log(`Server running at http://localhost:${port}`);
});

This setup ensures that each request is handled in an isolated context with its own memory management, leveraging Redis for session persistence across requests. Adjust the Redis configuration and model settings as needed for your specific use case.
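For example, a client could resume the same conversation across calls by reusing a stable sessionId (the hypothetical query parameter used above); omitting it falls back to a fresh timestamped session:

// Hypothetical client calls — the second request resumes the Redis-backed
// history of the first, because both reuse sessionId=user-42.
const first = await fetch("http://localhost:3000/query?input=Hello&sessionId=user-42");
console.log(await first.json());
const second = await fetch("http://localhost:3000/query?input=What%20did%20I%20say%3F&sessionId=user-42");
console.log(await second.json());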
UPDATE:
I have seen the updated documentation; it looks like checkpointers and the memory saver work in the latest version of LangGraph. However, adopting them changes this current code a lot.
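A minimal sketch of that direction, assuming a StateGraph workflow like the one above (workflow, app, and HumanMessage come from the earlier snippets; the route and sessionId parameter are illustrative):

import { MemorySaver } from "@langchain/langgraph";

// Compile the graph once with a checkpointer; graph state is then
// persisted and restored per thread rather than shared across requests.
const checkpointer = new MemorySaver();
const graph = workflow.compile({ checkpointer });

app.get("/blog", async (req, res) => {
  // Each request passes its own thread_id, so concurrent sessions
  // keep separate message histories instead of breaking each other.
  const result = await graph.invoke(
    { messages: [new HumanMessage({ content: `Write a blog about ${req.query.topic}.` })] },
    { configurable: { thread_id: String(req.query.sessionId ?? "default") } }
  );
  res.json({ output: result.messages.at(-1)?.content });
});

MemorySaver keeps checkpoints in process memory; swap in a persistent checkpointer (e.g. Redis-backed) if sessions must survive restarts or multiple server instances.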