Update comments and documentation #1

Merged · 3 commits · Sep 18, 2024
3 changes: 1 addition & 2 deletions .env.example
@@ -1,4 +1,3 @@
# Copy this over:
# cp .env.example .env
# Then modify to suit your needs
ANTHROPIC_API_KEY=...
# Then modify to suit your needs
21 changes: 8 additions & 13 deletions README.md
@@ -16,23 +16,20 @@ The simple chatbot:

1. Takes a user **message** as input
2. Maintains a history of the conversation
3. Generates a response based on the current message and conversation history
4. Updates the conversation history with the new interaction
3. Returns a placeholder response, updating the conversation history

This template provides a foundation that can be easily customized and extended to create more complex conversational agents.
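The flow described above can be sketched without any framework. The following is a simplified, framework-free illustration in plain TypeScript (not the template's actual LangGraph code; the `chatTurn` function and `Message` type here are hypothetical names for this sketch):

```typescript
// Simplified sketch of the chatbot loop described above (no LangGraph).
type Message = { role: "user" | "assistant"; content: string };

// The state is just the accumulated conversation history.
const history: Message[] = [];

// One turn: take a user message, record it, and return a placeholder
// response, updating the conversation history with both sides of the turn.
function chatTurn(userInput: string): string {
  history.push({ role: "user", content: userInput });
  const reply = "Hi, there! This is my-model"; // placeholder response
  history.push({ role: "assistant", content: reply });
  return reply;
}
```

In the real template, the same idea is split into a graph node that produces the response and a state reducer that accumulates the history.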

## Getting Started

Assuming you have already [installed LangGraph Studio](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#download), to set up:

1. Create a `.env` file.
1. Create a `.env` file. This template does not require any environment variables by default, but you will likely want to add some when customizing.

```bash
cp .env.example .env
```

2. Define required API keys in your `.env` file.

<!--
Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
-->
@@ -41,20 +38,19 @@ Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
End setup instructions
-->

2. Open the folder in LangGraph Studio!
3. Customize the code as needed.
4. Open the folder in LangGraph Studio!

## How to customize

1. **Modify the system prompt**: The default system prompt is defined in [configuration.ts](./src/agent/configuration.ts). You can easily update this via configuration in the studio to change the chatbot's personality or behavior.
2. **Select a different model**: We default to Anthropic's Claude 3 Sonnet. You can select a compatible chat model using `provider/model-name` via configuration. Example: `openai/gpt-4-turbo-preview`.
3. **Extend the graph**: The core logic of the chatbot is defined in [graph.ts](./src/agent/graph.ts). You can modify this file to add new nodes, edges, or change the flow of the conversation.
1. **Add an LLM call**: You can select and install a chat model wrapper from [the LangChain.js ecosystem](https://js.langchain.com/docs/integrations/chat/), or use LangGraph.js without LangChain.js.
2. **Extend the graph**: The core logic of the chatbot is defined in [graph.ts](./src/agent/graph.ts). You can modify this file to add new nodes, edges, or change the flow of the conversation.

You can also quickly extend this template by:
You can also extend this template by:

- Adding custom tools or functions to enhance the chatbot's capabilities.
- Adding [custom tools or functions](https://js.langchain.com/docs/how_to/tool_calling) to enhance the chatbot's capabilities.
- Implementing additional logic for handling specific types of user queries or tasks.
- Integrating external APIs or databases to provide more dynamic responses.
- Add retrieval-augmented generation (RAG) capabilities by integrating [external APIs or databases](https://langchain-ai.github.io/langgraphjs/tutorials/rag/langgraph_agentic_rag/) to provide more customized responses.
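As a rough illustration of the "custom tools" bullet above, a chatbot node could dispatch to a name-to-function map like the one below. This is a hand-rolled sketch only; LangChain.js provides richer tool-calling abstractions, and the `echo` tool here is hypothetical:

```typescript
// Hand-rolled sketch of tool dispatch: a registry mapping tool names to
// functions the chatbot could call when the model requests them.
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  // Hypothetical example tool that just echoes its input back.
  echo: (input) => `You said: ${input}`,
};

// Look up a tool by name and invoke it, failing loudly on unknown names.
function callTool(name: string, input: string): string {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool(input);
}
```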

## Development

@@ -81,4 +77,3 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
}
}
-->

2 changes: 1 addition & 1 deletion langgraph.json
@@ -1,7 +1,7 @@
{
"node_version": "20",
"graphs": {
"agent": "./src/agent.ts:graph"
"agent": "./src/agent/index.ts:graph"
Contributor comment:
Think we should just delete index and import from graph directly?

},
"env": ".env"
}
5 changes: 2 additions & 3 deletions package.json
@@ -21,13 +21,12 @@
"test:all": "yarn test && yarn test:int && yarn lint:langgraph"
},
"dependencies": {
"@langchain/core": "^0.3.1",
"@langchain/langgraph": "^0.2.3"
"@langchain/core": "^0.3.2",
"@langchain/langgraph": "^0.2.5"
},
"devDependencies": {
"@eslint/eslintrc": "^3.1.0",
"@eslint/js": "^9.9.1",
"@langchain/openai": "^0.2.7",
"@tsconfig/recommended": "^1.0.7",
"@types/jest": "^29.5.0",
"@typescript-eslint/eslint-plugin": "^5.59.8",
10 changes: 5 additions & 5 deletions src/agent/configuration.ts
@@ -9,15 +9,15 @@ export interface Configuration {
* Placeholder: you can define custom configuration to change the behavior of
* your graph!
*/
modelName: string;
model: string;
}

export function ensureConfiguration(config?: RunnableConfig): Configuration {
export function ensureConfiguration(config: RunnableConfig): Configuration {
/**
* Create a Configuration instance from a RunnableConfig object.
* Pull a default `configurable` field from a RunnableConfig object.
*/
const configurable = config?.configurable ?? {};
const configurable = config.configurable ?? {};
return {
modelName: configurable.modelName ?? "my-model",
model: configurable.model ?? "my-model",
};
}
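The defaulting behavior of `ensureConfiguration` can be shown standalone. Below is a simplified sketch using a minimal stand-in for `RunnableConfig` (the `RunnableConfigLike` interface is a hypothetical name for this illustration):

```typescript
// Minimal stand-in for the shape of RunnableConfig used here.
interface RunnableConfigLike {
  configurable?: Record<string, unknown>;
}

// Pull the `configurable` field out of the config, falling back to a
// default model name when none is provided.
function ensureConfiguration(config: RunnableConfigLike): { model: string } {
  const configurable = config.configurable ?? {};
  return {
    model: (configurable.model as string) ?? "my-model",
  };
}
```

The real function does the same thing against LangChain's `RunnableConfig` type, which is how Studio's configuration panel feeds values into the graph.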
75 changes: 63 additions & 12 deletions src/agent/graph.ts
@@ -5,31 +5,82 @@
*/

import { StateGraph } from "@langchain/langgraph";
import { StateAnnotation, State } from "./state.js";
import { AIMessage } from "@langchain/core/messages";
import { StateAnnotation } from "./state.js";
import { ensureConfiguration } from "./configuration.js";
import { RunnableConfig } from "@langchain/core/runnables";

// Define nodes, these do the work:

const callModel = async (_state: State, config: RunnableConfig) => {
// Do some work... (e.g. call an LLM)
/**
* Define a node, these do the work of the graph and should have most of the logic.
* Must return a subset of the properties set in StateAnnotation.
* @param state The current state of the graph.
* @param config Extra parameters passed into the state graph.
* @returns Some subset of parameters of the graph state, used to update the state
* for the edges and nodes executed next.
*/
const callModel = async (
state: typeof StateAnnotation.State,
config: RunnableConfig,
): Promise<typeof StateAnnotation.Update> => {
const configuration = ensureConfiguration(config);
/**
* Do some work... (e.g. call an LLM)
* For example, with LangChain you could do something like:
*
* ```bash
* $ npm i @langchain/anthropic
* ```
*
* ```ts
* import { ChatAnthropic } from "@langchain/anthropic";
* const model = new ChatAnthropic({
* model: "claude-3-5-sonnet-20240620",
* apiKey: process.env.ANTHROPIC_API_KEY,
* });
* const res = await model.invoke(state.messages);
* ```
*
* Or, with an SDK directly:
*
* ```bash
* $ npm i openai
* ```
*
* ```ts
* import OpenAI from "openai";
* const openai = new OpenAI({
* apiKey: process.env.OPENAI_API_KEY,
* });
*
* const chatCompletion = await openai.chat.completions.create({
* messages: [{
* role: state.messages[0]._getType(),
* content: state.messages[0].content,
* }],
* model: "gpt-4o-mini",
* });
* ```
*/
console.log("Current state:", state);
return {
messages: [new AIMessage(`Hi, there! This is ${configuration.modelName}`)],
messages: [
{
role: "assistant",
content: `Hi, there! This is ${configuration.model}`,
},
],
};
};

// Define conditional edge logic:

/**
* Routing function: Determines whether to continue research or end the builder.
* This function decides if the gathered information is satisfactory or if more research is needed.
*
* @param state - The current state of the research builder
* @returns Either "callModel" to continue research or END to finish the builder
*/
export const _route = (state: State): "__end__" | "callModel" => {
export const route = (
state: typeof StateAnnotation.State,
): "__end__" | "callModel" => {
if (state.messages.length > 0) {
return "__end__";
}
@@ -50,8 +101,8 @@ const builder = new StateGraph(StateAnnotation)
// and represent the beginning and end of the builder.
.addEdge("__start__", "callModel")
// Conditional edges optionally route to different nodes (or end)
//
.addConditionalEdges("callModel", _route);
.addConditionalEdges("callModel", route);

export const graph = builder.compile();

graph.name = "New Agent";
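The conditional edge wired up with `addConditionalEdges` reduces to a pure routing function. A framework-free sketch of the same decision, assuming only that state carries a `messages` array:

```typescript
// Standalone version of the routing decision above: end the run once the
// model has produced at least one message, otherwise loop back to the node.
type RouteState = { messages: unknown[] };

function route(state: RouteState): "__end__" | "callModel" {
  return state.messages.length > 0 ? "__end__" : "callModel";
}
```

Because routing functions are pure, they are easy to unit-test in isolation, which is exactly what `tests/agent.test.ts` below does.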
77 changes: 43 additions & 34 deletions src/agent/state.ts
@@ -1,50 +1,59 @@
import { BaseMessage } from "@langchain/core/messages";
import { BaseMessage, BaseMessageLike } from "@langchain/core/messages";
import { Annotation, messagesStateReducer } from "@langchain/langgraph";

/**
* A graph's StateAnnotation defines three main thing:
* A graph's StateAnnotation defines three main things:
* 1. The structure of the data to be passed between nodes (which "channels" to read from/write to and their types)
* 2. Default values each field
* 3. Rducers for the state's. Reducers are functions that determine how to apply updates to the state.
* 2. Default values for each field
 * 3. Reducers for the state's fields. Reducers are functions that determine how to apply updates to the state.
* See [Reducers](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#reducers) for more information.
*/

// This is the primary state of your agent, where you can store any information
export const StateAnnotation = Annotation.Root({
/**
* Messages track the primary execution state of the agent.

Typically accumulates a pattern of:

1. HumanMessage - user input
2. AIMessage with .tool_calls - agent picking tool(s) to use to collect
information
3. ToolMessage(s) - the responses (or errors) from the executed tools

(... repeat steps 2 and 3 as needed ...)
4. AIMessage without .tool_calls - agent responding in unstructured
format to the user.

5. HumanMessage - user responds with the next conversational turn.

(... repeat steps 2-5 as needed ... )

Merges two lists of messages, updating existing messages by ID.

By default, this ensures the state is "append-only", unless the
new message has the same ID as an existing message.

Returns:
A new list of messages with the messages from \`right\` merged into \`left\`.
If a message in \`right\` has the same ID as a message in \`left\`, the
message from \`right\` will replace the message from \`left\`.`
*
* Typically accumulates a pattern of:
*
* 1. HumanMessage - user input
* 2. AIMessage with .tool_calls - agent picking tool(s) to use to collect
* information
* 3. ToolMessage(s) - the responses (or errors) from the executed tools
*
* (... repeat steps 2 and 3 as needed ...)
* 4. AIMessage without .tool_calls - agent responding in unstructured
* format to the user.
*
* 5. HumanMessage - user responds with the next conversational turn.
*
* (... repeat steps 2-5 as needed ... )
*
* Merges two lists of messages or message-like objects with role and content,
* updating existing messages by ID.
*
* Message-like objects are automatically coerced by `messagesStateReducer` into
* LangChain message classes. If a message does not have a given id,
* LangGraph will automatically assign one.
*
* By default, this ensures the state is "append-only", unless the
* new message has the same ID as an existing message.
*
* Returns:
* A new list of messages with the messages from \`right\` merged into \`left\`.
* If a message in \`right\` has the same ID as a message in \`left\`, the
* message from \`right\` will replace the message from \`left\`.`
*/
messages: Annotation<BaseMessage[]>({
messages: Annotation<BaseMessage[], BaseMessageLike[]>({
reducer: messagesStateReducer,
default: () => [],
}),
// Feel free to add additional attributes to your state as needed.
// Common examples include retrieved documents, extracted entities, API connections, etc.
/**
* Feel free to add additional attributes to your state as needed.
* Common examples include retrieved documents, extracted entities, API connections, etc.
*
* For simple fields whose value should be overwritten by the return value of a node,
* you don't need to define a reducer or default.
*/
// additionalField: Annotation<string>,
});

export type State = typeof StateAnnotation.State;
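The merge-by-ID semantics described in the doc comment above can be illustrated with a small stand-alone function. This is a simplified re-implementation for illustration only, not the real `messagesStateReducer` (which also coerces message-like objects and assigns missing IDs):

```typescript
// Simplified sketch of merge-by-ID reducer semantics: messages from `right`
// replace same-ID messages in `left`; everything else is appended.
type Msg = { id: string; content: string };

function mergeMessages(left: Msg[], right: Msg[]): Msg[] {
  const merged = [...left];
  for (const msg of right) {
    const i = merged.findIndex((m) => m.id === msg.id);
    if (i >= 0) {
      merged[i] = msg; // same ID: replace the existing message
    } else {
      merged.push(msg); // new ID: append (state stays append-only)
    }
  }
  return merged;
}
```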
4 changes: 2 additions & 2 deletions tests/agent.test.ts
@@ -1,8 +1,8 @@
import { describe, it, expect } from "@jest/globals";
import { _route } from "../src/agent/graph.js";
import { route } from "../src/agent/graph.js";
describe("Routers", () => {
it("Test route", async () => {
const res = _route({ messages: [] });
const res = route({ messages: [] });
expect(res).toEqual("callModel");
}, 100_000);
});