diff --git a/.github/workflows/deploy_docs.yml b/.github/workflows/deploy_docs.yml index 229019b73..6c5ae57ab 100644 --- a/.github/workflows/deploy_docs.yml +++ b/.github/workflows/deploy_docs.yml @@ -21,11 +21,13 @@ concurrency: jobs: deploy: runs-on: ubuntu-latest + env: + GITHUB_TOKEN: ${{ secrets.MKDOCS_GITHUB_TOKEN }} steps: - uses: actions/checkout@v4 with: fetch-depth: 0 - + - name: Set up Python uses: actions/setup-python@v4 with: @@ -36,6 +38,7 @@ jobs: pip install uv uv venv uv pip install -r docs/docs-requirements.txt + uv pip install "git+https://${GITHUB_TOKEN}@github.com/langchain-ai/mkdocs-material-insiders.git" - name: Use Node.js 18.x uses: actions/setup-node@v3 diff --git a/README.md b/README.md index 70c1e71df..bab31d659 100644 --- a/README.md +++ b/README.md @@ -185,7 +185,7 @@ Is there anything else you'd like to know about the weather in New York or any o Initialize the model and tools. - We use `ChatAnthropic` as our LLM. **NOTE:** We need make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for Anthropic tool calling using the `.bindTools()` method. - - We define the tools we want to use -- a weather tool in our case. See the documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to create your own tools. + - We define the tools we want to use -- a weather tool in our case. See the documentation [here](https://js.langchain.com/docs/how_to/custom_tools/) on how to create your own tools. 2.
diff --git a/docs/docs-requirements.txt b/docs/docs-requirements.txt index 4a3c16e0f..077838b5e 100644 --- a/docs/docs-requirements.txt +++ b/docs/docs-requirements.txt @@ -4,7 +4,6 @@ mkdocs-jupyter mkdocs-redirects mkdocs-minify-plugin mkdocs-rss-plugin -mkdocs-material[imaging] mkdocs-typedoc markdown-include -markdown-callouts \ No newline at end of file +markdown-callouts diff --git a/docs/docs/concepts/index.md b/docs/docs/concepts/index.md index e82654eac..44edf515c 100644 --- a/docs/docs/concepts/index.md +++ b/docs/docs/concepts/index.md @@ -15,11 +15,11 @@ The conceptual guide does not cover step-by-step instructions or specific implem ## LangGraph -**High Level** +### High Level - [Why LangGraph?](high_level.md): A high-level overview of LangGraph and its goals. -**Concepts** +### Concepts - [LangGraph Glossary](low_level.md): LangGraph workflows are designed as graphs, with nodes representing different components and edges representing the flow of information between them. This guide provides an overview of the key concepts associated with LangGraph graph primitives. - [Common Agentic Patterns](agentic_concepts.md): An agent uses an LLM to pick its own control flow to solve more complex problems! Agents are a key building block in many LLM applications. This guide explains the different types of agent architectures and how they can be used to control the flow of an application. @@ -41,14 +41,14 @@ The LangGraph Platform offers a few different deployment options described in th * LangGraph is an MIT-licensed open-source library, which we are committed to maintaining and growing for the community. * You can always deploy LangGraph applications on your own infrastructure using the open-source LangGraph project without using LangGraph Platform. -**High Level** +### High Level - [Why LangGraph Platform?](./langgraph_platform.md): The LangGraph platform is an opinionated way to deploy and manage LangGraph applications. 
This guide provides an overview of the key features and concepts behind LangGraph Platform. - [Deployment Options](./deployment_options.md): LangGraph Platform offers four deployment options: [Self-Hosted Lite](./self_hosted.md#self-hosted-lite), [Self-Hosted Enterprise](./self_hosted.md#self-hosted-enterprise), [bring your own cloud (BYOC)](./bring_your_own_cloud.md), and [Cloud SaaS](./langgraph_cloud.md). This guide explains the differences between these options, and which Plans they are available on. - [Plans](./plans.md): LangGraph Platforms offer three different plans: Developer, Plus, Enterprise. This guide explains the differences between these options, what deployment options are available for each, and how to sign up for each one. - [Template Applications](./template_applications.md): Reference applications designed to help you get started quickly when building with LangGraph. -**Components** +### Components The LangGraph Platform comprises several components that work together to support the deployment and management of LangGraph applications: @@ -58,7 +58,7 @@ The LangGraph Platform comprises several components that work together to suppor - [Python/JS SDK](./sdk.md): The Python/JS SDK provides a programmatic way to interact with deployed LangGraph Applications. - [Remote Graph](../how-tos/use-remote-graph.md): A RemoteGraph allows you to interact with any deployed LangGraph application as though it were running locally. -**LangGraph Server** +### LangGraph Server - [Application Structure](./application_structure.md): A LangGraph application consists of one or more graphs, a LangGraph API Configuration file (`langgraph.json`), a file that specifies dependencies, and environment variables. - [Assistants](./assistants.md): Assistants are a way to save and manage different configurations of your LangGraph applications. 
@@ -66,9 +66,7 @@ The LangGraph Platform comprises several components that work together to suppor - [Cron Jobs](./langgraph_server.md#cron-jobs): Cron jobs are a way to schedule tasks to run at specific times in your LangGraph application. - [Double Texting](./double_texting.md): Double texting is a common issue in LLM applications where users may send multiple messages before the graph has finished running. This guide explains how to handle double texting with LangGraph Deploy. - -**Deployment Options** - +### Deployment Options - [Self-Hosted Lite](./self_hosted.md): A free (up to 1 million nodes executed), limited version of LangGraph Platform that you can run locally or in a self-hosted manner - [Cloud SaaS](./langgraph_cloud.md): Hosted as part of LangSmith. diff --git a/docs/docs/how-tos/index.md b/docs/docs/how-tos/index.md index aee88e887..e377af638 100644 --- a/docs/docs/how-tos/index.md +++ b/docs/docs/how-tos/index.md @@ -21,8 +21,6 @@ Here you’ll find answers to “How do I...?” types of questions. These guide LangGraph.js is known for being a highly controllable agent framework. These how-to guides show how to achieve that controllability. -- [How to define graph state](define-state.ipynb) -- [How to create subgraphs](subgraph.ipynb) - [How to create branches for parallel execution](branching.ipynb) - [How to create map-reduce branches for parallel execution](map-reduce.ipynb) @@ -34,8 +32,12 @@ LangGraph.js makes it easy to persist state across graph runs. The guides below - [How to add thread-level persistence to subgraphs](subgraph-persistence.ipynb) - [How to add cross-thread persistence](cross-thread-persistence.ipynb) - [How to use a Postgres checkpointer for persistence](persistence-postgres.ipynb) + +### Memory + +LangGraph makes it easy to manage conversation [memory](../concepts/memory.md) in your graph. These how-to guides show how to implement different strategies for that. 
+ - [How to manage conversation history](manage-conversation-history.ipynb) -- [How to view and update past graph state](time-travel.ipynb) - [How to delete messages](delete-messages.ipynb) - [How to add summary of the conversation history](add-summary-conversation-history.ipynb) @@ -46,8 +48,9 @@ These guides cover common examples of that. - [How to add breakpoints](breakpoints.ipynb) - [How to add dynamic breakpoints](dynamic_breakpoints.ipynb) -- [How to wait for user input](wait-user-input.ipynb) - [How to edit graph state](edit-graph-state.ipynb) +- [How to wait for user input](wait-user-input.ipynb) +- [How to view and update past graph state](time-travel.ipynb) - [How to review tool calls](review-tool-calls.ipynb) ### Streaming @@ -55,12 +58,12 @@ These guides cover common examples of that. LangGraph is built to be streaming first. These guides show how to use different streaming modes. -- [How to stream full state of your graph](stream-values.ipynb) +- [How to stream the full state of your graph](stream-values.ipynb) - [How to stream state updates of your graph](stream-updates.ipynb) -- [How to configure multiple streaming modes](stream-multiple.ipynb) - [How to stream LLM tokens](stream-tokens.ipynb) - [How to stream LLM tokens without LangChain models](streaming-tokens-without-langchain.ipynb) - [How to stream custom data](streaming-content.ipynb) +- [How to configure multiple streaming modes](stream-multiple.ipynb) - [How to stream events from within a tool](streaming-events-from-within-tools.ipynb) - [How to stream from the final node](streaming-from-final-node.ipynb) @@ -73,21 +76,25 @@ These guides show how to use different streaming modes. ### Subgraphs +[Subgraphs](../concepts/low_level.md#subgraphs) allow you to reuse an existing graph from another graph. 
These how-to guides show how to use subgraphs: + - [How to add and use subgraphs](subgraph.ipynb) - [How to view and update state in subgraphs](subgraphs-manage-state.ipynb) - [How to transform inputs and outputs of a subgraph](subgraph-transform-state.ipynb) ### State management +- [How to define graph state](define-state.ipynb) - [Have a separate input and output schema](input_output_schema.ipynb) - [Pass private state between nodes inside the graph](pass_private_state.ipynb) -### Prebuilt ReAct Agent +### Other -- [How to create a ReAct agent](create-react-agent.ipynb) -- [How to add memory to a ReAct agent](react-memory.ipynb) -- [How to add a system prompt to a ReAct agent](react-system-prompt.ipynb) -- [How to add Human-in-the-loop to a ReAct agent](react-human-in-the-loop.ipynb) +- [How to add runtime configuration to your graph](configuration.ipynb) +- [How to add node retries](node-retry-policies.ipynb) +- [How to let agent return tool results directly](dynamically-returning-directly.ipynb) +- [How to have agent respond in structured format](respond-in-format.ipynb) +- [How to manage agent steps](managing-agent-steps.ipynb) ### Prebuilt ReAct Agent @@ -96,14 +103,6 @@ These guides show how to use different streaming modes. - [How to add a system prompt to a ReAct agent](react-system-prompt.ipynb) - [How to add Human-in-the-loop to a ReAct agent](react-human-in-the-loop.ipynb) -### Other - -- [How to add runtime configuration to your graph](configuration.ipynb) -- [How to let agent return tool results directly](dynamically-returning-directly.ipynb) -- [How to have agent respond in structured format](respond-in-format.ipynb) -- [How to manage agent steps](managing-agent-steps.ipynb) -- [How to add node retry policies](node-retry-policies.ipynb) - ## LangGraph Platform This section includes how-to guides for LangGraph Platform. 
@@ -203,8 +202,9 @@ LangGraph Studio is a built-in UI for visualizing, testing, and debugging your a ## Troubleshooting -The [Error Reference](../troubleshooting/errors/index.md) page contains guides around resolving common errors you may find while building with LangGraph. Errors referenced below will have an `lc_error_code` property corresponding to one of the below codes when they are thrown in code. - -### Errors +These are the guides for resolving common errors you may find while building with LangGraph. Errors referenced below will have an `lc_error_code` property corresponding to one of the below codes when they are thrown in code. -- [Error reference](../troubleshooting/errors/index.md) +- [GRAPH_RECURSION_LIMIT](../troubleshooting/errors/GRAPH_RECURSION_LIMIT.ipynb) +- [INVALID_CONCURRENT_GRAPH_UPDATE](../troubleshooting/errors/INVALID_CONCURRENT_GRAPH_UPDATE.ipynb) +- [INVALID_GRAPH_NODE_RETURN_VALUE](../troubleshooting/errors/INVALID_GRAPH_NODE_RETURN_VALUE.ipynb) +- [MULTIPLE_SUBGRAPHS](../troubleshooting/errors/MULTIPLE_SUBGRAPHS.ipynb) \ No newline at end of file diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 58b205474..86a24556f 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -22,6 +22,7 @@ theme: - navigation.instant - navigation.instant.prefetch - navigation.instant.progress + - navigation.path - navigation.prune - navigation.tabs - navigation.top @@ -65,12 +66,153 @@ plugins: title_link: "/" # optional, default: '/' nav: - # Setting the names of the nav items explicitly due to mkdocs - # how-reload being a bit buggy with the names of the tabs. 
- - Home: "index.md" - - Tutorials: "tutorials/index.md" - - Concepts: "concepts/index.md" - - "How-to Guides": "how-tos/index.md" + - Home: index.md + - Tutorials: + - tutorials/index.md + - Quick Start: + - Quick Start: tutorials#quick-start + - tutorials/quickstart.ipynb + - Chatbots: + - Chatbots: tutorials/chatbots/customer_support_small_model.ipynb + - RAG: + - RAG: tutorials#rag + - tutorials/rag/langgraph_agentic_rag.ipynb + - tutorials/rag/langgraph_crag.ipynb + - tutorials/rag/langgraph_self_rag.ipynb + - Agent Architectures: + - Agent Architectures: tutorials#agent-architectures + - Multi-Agent Systems: + - Multi-Agent Systems: tutorials#multi-agent-systems + - tutorials/multi_agent/multi_agent_collaboration.ipynb + - tutorials/multi_agent/agent_supervisor.ipynb + - tutorials/multi_agent/hierarchical_agent_teams.ipynb + - Planning Agents: + - Planning Agents: tutorials#planning-agents + - tutorials/plan-and-execute/plan-and-execute.ipynb + - Reflection & Critique: + - Reflection & Critique: tutorials#reflection-critique + - tutorials/reflection/reflection.ipynb + - tutorials/rewoo/rewoo.ipynb + - Evaluation & Analysis: + - Evaluation & Analysis: tutorials#evaluation + - tutorials/chatbot-simulation-evaluation/agent-simulation-evaluation.ipynb + + - How-to Guides: + - how-tos/index.md + - Installation: + - Installation: how-tos#installation + - how-tos/manage-ecosystem-dependencies.ipynb + - how-tos/use-in-web-environments.ipynb + - LangGraph: + - LangGraph: how-tos#langgraph + - Controllability: + - Controllability: how-tos#controllability + - how-tos/map-reduce.ipynb + - how-tos/branching.ipynb + - Persistence: + - Persistence: how-tos#persistence + - how-tos/persistence.ipynb + - how-tos/subgraph-persistence.ipynb + - how-tos/cross-thread-persistence.ipynb + - how-tos/persistence-postgres.ipynb + - Memory: + - Memory: how-tos#memory + - how-tos/manage-conversation-history.ipynb + - how-tos/delete-messages.ipynb + - 
how-tos/add-summary-conversation-history.ipynb + - Human-in-the-loop: + - Human-in-the-loop: how-tos#human-in-the-loop + - how-tos/breakpoints.ipynb + - how-tos/dynamic_breakpoints.ipynb + - how-tos/edit-graph-state.ipynb + - how-tos/wait-user-input.ipynb + - how-tos/time-travel.ipynb + - how-tos/review-tool-calls.ipynb + - Streaming: + - Streaming: how-tos#streaming + - how-tos/stream-values.ipynb + - how-tos/stream-updates.ipynb + - how-tos/stream-tokens.ipynb + - how-tos/streaming-tokens-without-langchain.ipynb + - how-tos/streaming-content.ipynb + - how-tos/stream-multiple.ipynb + - how-tos/streaming-events-from-within-tools.ipynb + - how-tos/streaming-from-final-node.ipynb + - Tool calling: + - Tool calling: how-tos#tool-calling + - how-tos/tool-calling.ipynb + - how-tos/force-calling-a-tool-first.ipynb + - how-tos/tool-calling-errors.ipynb + - how-tos/pass-run-time-values-to-tools.ipynb + - Subgraphs: + - Subgraphs: how-tos#subgraphs + - how-tos/subgraph.ipynb + - how-tos/subgraphs-manage-state.ipynb + - how-tos/subgraph-transform-state.ipynb + - State Management: + - State Management: how-tos#state-management + - how-tos/define-state.ipynb + - how-tos/input_output_schema.ipynb + - how-tos/pass_private_state.ipynb + - Other: + - Other: how-tos#other + - how-tos/configuration.ipynb + - how-tos/node-retry-policies.ipynb + - how-tos/dynamically-returning-directly.ipynb + - how-tos/respond-in-format.ipynb + - how-tos/managing-agent-steps.ipynb + - Prebuilt ReAct Agent: + - Prebuilt ReAct Agent: how-tos#prebuilt-react-agent + - how-tos/create-react-agent.ipynb + - how-tos/react-memory.ipynb + - how-tos/react-system-prompt.ipynb + - how-tos/react-human-in-the-loop.ipynb + - Troubleshooting: + - Troubleshooting: how-tos#troubleshooting + - troubleshooting/errors/index.md + - troubleshooting/errors/GRAPH_RECURSION_LIMIT.ipynb + - troubleshooting/errors/INVALID_CONCURRENT_GRAPH_UPDATE.ipynb + - troubleshooting/errors/INVALID_GRAPH_NODE_RETURN_VALUE.ipynb + - 
troubleshooting/errors/MULTIPLE_SUBGRAPHS.ipynb + + - Conceptual Guides: + - concepts/index.md + - LangGraph: + - LangGraph: concepts#langgraph + - concepts/high_level.md + - concepts/low_level.md + - concepts/agentic_concepts.md + - concepts/multi_agent.md + - concepts/human_in_the_loop.md + - concepts/persistence.md + - concepts/memory.md + - concepts/streaming.md + - concepts/faq.md + - LangGraph Platform: + - LangGraph Platform: concepts#langgraph-platform + - High Level: + - High Level: concepts#high-level + - concepts/langgraph_platform.md + - concepts/deployment_options.md + - concepts/plans.md + - concepts/template_applications.md + - Components: + - Components: concepts#components + - concepts/langgraph_server.md + - concepts/langgraph_studio.md + - concepts/langgraph_cli.md + - concepts/sdk.md + - how-tos/use-remote-graph.md + - LangGraph Server: + - LangGraph Server: concepts#langgraph-server + - concepts/application_structure.md + - concepts/assistants.md + - concepts/double_texting.md + - Deployment Options: + - Deployment Options: concepts#deployment-options + - concepts/self_hosted.md + - concepts/langgraph_cloud.md + - concepts/bring_your_own_cloud.md - "Reference": - "reference/index.html" - "Versions": @@ -148,7 +290,6 @@ validation: omitted_files: ignore # absolute_links: warn unrecognized_links: warn - nav: - not_found: warn - links: - not_found: warn + anchors: info + # this is needed to handle headers with anchors for nav + not_found: info diff --git a/docs/overrides/main.html b/docs/overrides/main.html index 7616fba79..e72b9d8bb 100644 --- a/docs/overrides/main.html +++ b/docs/overrides/main.html @@ -34,6 +34,17 @@ color: #1E88E5; } + .md-sidebar { + display: none; + } + + /* Show sidebar on mobile */ + @media screen and (max-width: 1220px) { + .md-sidebar--primary { + display: block; + } + } + .md-typeset a:hover { color: #1565C0; } diff --git a/examples/chatbots/customer_support_small_model.ipynb 
b/examples/chatbots/customer_support_small_model.ipynb index c95038328..c6747d0c2 100644 --- a/examples/chatbots/customer_support_small_model.ipynb +++ b/examples/chatbots/customer_support_small_model.ipynb @@ -660,7 +660,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "But this will again hit the interrupt becuase `refundAuthorized` is not set. If we update the state to set `refundAuthorized` to true, then resume the graph by running it with the same `thread_id` and passing `null` as the input, execution will continue and the refund will process:" + "But this will again hit the interrupt because `refundAuthorized` is not set. If we update the state to set `refundAuthorized` to true, then resume the graph by running it with the same `thread_id` and passing `null` as the input, execution will continue and the refund will process:" ] }, { diff --git a/examples/how-tos/dynamically-returning-directly.ipynb b/examples/how-tos/dynamically-returning-directly.ipynb index 0061f5a87..16079e8c5 100644 --- a/examples/how-tos/dynamically-returning-directly.ipynb +++ b/examples/how-tos/dynamically-returning-directly.ipynb @@ -149,7 +149,7 @@ "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. 
It should support\n", - " [tool calling](https://js.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n", + " [tool calling](https://js.langchain.com/docs/concepts/tool_calling/).\n", "\n", "Note: these model requirements are not requirements for using LangGraph - they\n", "are just requirements for this one example.\n" diff --git a/examples/how-tos/manage-conversation-history.ipynb b/examples/how-tos/manage-conversation-history.ipynb index 5959e860c..8577b962d 100644 --- a/examples/how-tos/manage-conversation-history.ipynb +++ b/examples/how-tos/manage-conversation-history.ipynb @@ -11,8 +11,8 @@ "\n", "Note: this guide focuses on how to do this in LangGraph, where you can fully customize how this is done. If you want a more off-the-shelf solution, you can look into functionality provided in LangChain:\n", "\n", - "- [How to filter messages](https://js.langchain.com/v0.2/docs/how_to/filter_messages/)\n", - "- [How to trim messages](https://js.langchain.com/v0.2/docs/how_to/trim_messages/)" + "- [How to filter messages](https://js.langchain.com/docs/how_to/filter_messages/)\n", + "- [How to trim messages](https://js.langchain.com/docs/how_to/trim_messages/)" ] }, { @@ -361,8 +361,8 @@ "source": [ "In the above example we defined the `filter_messages` function ourselves. We also provide off-the-shelf ways to trim and filter messages in LangChain. 
\n", "\n", - "- [How to filter messages](https://js.langchain.com/v0.2/docs/how_to/filter_messages/)\n", - "- [How to trim messages](https://js.langchain.com/v0.2/docs/how_to/trim_messages/)" + "- [How to filter messages](https://js.langchain.com/docs/how_to/filter_messages/)\n", + "- [How to trim messages](https://js.langchain.com/docs/how_to/trim_messages/)" ] } ], diff --git a/examples/how-tos/manage-ecosystem-dependencies.ipynb b/examples/how-tos/manage-ecosystem-dependencies.ipynb index 89ca15247..8241ff8ff 100644 --- a/examples/how-tos/manage-ecosystem-dependencies.ipynb +++ b/examples/how-tos/manage-ecosystem-dependencies.ipynb @@ -29,10 +29,9 @@ "```\n", "\n", "`@langchain/core` must be installed separately because it is a peer dependency of `@langchain/langgraph`.\n", - "This is to help package managers resolve a single version of `@langchain/core`. Despite this,\n", - "in some situations, your package manager may resolve multiple versions of core, which can result in unexpected TypeScript errors or other strange behavior.\n", + "This is to help package managers resolve a single version of `@langchain/core`.\n", "\n", - "The best way to guarantee that you only have one version of `@langchain/core` is to add a `\"resolutions\"` or\n", + "Despite this, in some situations, your package manager may resolve multiple versions of core, which can result in unexpected TypeScript errors or other strange behavior. If you need to guarantee that you only have one version of `@langchain/core`, add a `\"resolutions\"` or\n", "`\"overrides\"` field in your project's `package.json`. The specific field name will depend on your package manager. Here are a few examples:\n", "\n", "
\n", diff --git a/examples/how-tos/persistence.ipynb b/examples/how-tos/persistence.ipynb index ba78b7872..d645a36ec 100644 --- a/examples/how-tos/persistence.ipynb +++ b/examples/how-tos/persistence.ipynb @@ -32,7 +32,7 @@ "
<div class=\"admonition tip\">\n",
"    <p class=\"admonition-title\">Note</p>\n",
"    <p>\n",
- "        In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n",
+ "        In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n",
"    </p>\n",
"</div>
\n", "\n", @@ -108,7 +108,7 @@ "We will first define the tools we want to use. For this simple example, we will\n", "use create a placeholder search engine. However, it is really easy to create\n", "your own tools - see documentation\n", - "[here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to do\n", + "[here](https://js.langchain.com/docs/how_to/custom_tools) on how to do\n", "that." ] }, @@ -174,12 +174,12 @@ "## Set up the model\n", "\n", "Now we will load the\n", - "[chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", + "[chat model](https://js.langchain.com/docs/concepts/#chat-models).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. It should work with\n", - " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", + " [tool calling](https://js.langchain.com/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
\n", diff --git a/examples/how-tos/respond-in-format.ipynb b/examples/how-tos/respond-in-format.ipynb index b8d361aec..95f39a5f2 100644 --- a/examples/how-tos/respond-in-format.ipynb +++ b/examples/how-tos/respond-in-format.ipynb @@ -443,7 +443,7 @@ "source": [ "## Partially streaming JSON\n", "\n", - "If we want to stream the structured output as soon as it's available, we can use the [`.streamEvents()`](https://js.langchain.com/v0.2/docs/how_to/streaming#using-stream-events) method. We'll aggregate emitted `on_chat_model_events` and inspect the name field. As soon as we detect that the model is calling the final output tool, we can start logging the relevant chunks.\n", + "If we want to stream the structured output as soon as it's available, we can use the [`.streamEvents()`](https://js.langchain.com/docs/how_to/streaming#using-stream-events) method. We'll aggregate emitted `on_chat_model_events` and inspect the name field. As soon as we detect that the model is calling the final output tool, we can start logging the relevant chunks.\n", "\n", "Here's an example:" ] diff --git a/examples/how-tos/stream-tokens.ipynb b/examples/how-tos/stream-tokens.ipynb index bb70f13c9..55046f950 100644 --- a/examples/how-tos/stream-tokens.ipynb +++ b/examples/how-tos/stream-tokens.ipynb @@ -8,16 +8,14 @@ "# How to stream LLM tokens from your graph\n", "\n", "In this example, we will stream tokens from the language model powering an\n", - "agent. We will use a ReAct agent as an example. The tl;dr is to use\n", - "[streamEvents](https://js.langchain.com/v0.2/docs/how_to/chat_streaming/#stream-events)\n", - "([API Ref](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#streamEvents)).\n", + "agent. We will use a ReAct agent as an example.\n", "\n", "
<div class=\"admonition note\">\n",
"    <p class=\"admonition-title\">Note</p>\n",
"    <p>\n",
"        If you are using a version of @langchain/core < 0.2.3, when calling chat models or LLMs you need to call await model.stream() within your nodes to get token-by-token streaming events, and aggregate final outputs if needed to update the graph state. In later versions of @langchain/core, this occurs automatically, and you can call await model.invoke().\n",
- "        For more on how to upgrade @langchain/core, check out the instructions here.\n",
+ "        For more on how to upgrade @langchain/core, check out the instructions here.\n",
"    </p>\n",
"</div>
\n", "\n", @@ -27,14 +25,14 @@ "
<div class=\"admonition note\">\n",
"    <p class=\"admonition-title\">Streaming Support</p>\n",
"    <p>\n",
- "        Token streaming is supported by many, but not all chat models. Check to see if your LLM integration supports token streaming here (doc). Note that some integrations may support general token streaming but lack support for streaming tool calls.\n",
+ "        Token streaming is supported by many, but not all chat models. Check to see if your LLM integration supports token streaming here (doc). Note that some integrations may support general token streaming but lack support for streaming tool calls.\n",
"    </p>\n",
"</div>\n",
"\n",
"<div class=\"admonition tip\">\n",
"    <p class=\"admonition-title\">Note</p>\n",
"    <p>\n",
- "        In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ llm, tools }) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n",
+ "        In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ llm, tools }) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n",
"    </p>\n",
"</div>
\n", "\n", @@ -101,7 +99,7 @@ "source": [ "## Set up the tools\n", "\n", - "First define the tools you want to use. For this simple example, we'll create a placeholder search engine, but see the documentation [here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to create your own custom tools." + "First define the tools you want to use. For this simple example, we'll create a placeholder search engine, but see the documentation [here](https://js.langchain.com/docs/how_to/custom_tools) on how to create your own custom tools." ] }, { @@ -165,12 +163,12 @@ "source": [ "## Set up the model\n", "\n", - "Now load the [chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", + "Now load the [chat model](https://js.langchain.com/docs/concepts/#chat-models).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. It should work with\n", - " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", + " [tool calling](https://js.langchain.com/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
\n", @@ -310,6 +308,13 @@ "\n", "### The stream method\n", "\n", + "
<div class=\"admonition caution\">\n",
+ "    <p class=\"admonition-title\">Compatibility</p>\n",
+ "    <p>\n",
+ "        This section requires @langchain/langgraph>=0.2.20. For help upgrading, see this guide.\n",
+ "    </p>\n",
+ "</div>
\n", + "\n", "For this method, you must be using an LLM that supports streaming as well and enable it when constructing the LLM (e.g. `new ChatOpenAI({ model: \"gpt-4o-mini\", streaming: true })`) or call `.stream` on the internal LLM call." ] }, diff --git a/examples/how-tos/stream-updates.ipynb b/examples/how-tos/stream-updates.ipynb index 88a77d747..5f31c0541 100644 --- a/examples/how-tos/stream-updates.ipynb +++ b/examples/how-tos/stream-updates.ipynb @@ -68,7 +68,7 @@ "We will first define the tools we want to use. For this simple example, we will\n", "use create a placeholder search engine. However, it is really easy to create\n", "your own tools - see documentation\n", - "[here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to do\n", + "[here](https://js.langchain.com/docs/how_to/custom_tools) on how to do\n", "that.\n" ] }, @@ -134,12 +134,12 @@ "## Set up the model\n", "\n", "Now we will load the\n", - "[chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", + "[chat model](https://js.langchain.com/docs/concepts/chat_models/).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. It should work with\n", - " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", + " [tool calling](https://js.langchain.com/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
\n", diff --git a/examples/how-tos/stream-values.ipynb b/examples/how-tos/stream-values.ipynb index 613df530f..6349eb5e3 100644 --- a/examples/how-tos/stream-values.ipynb +++ b/examples/how-tos/stream-values.ipynb @@ -68,7 +68,7 @@ "We will first define the tools we want to use. For this simple example, we will\n", "use create a placeholder search engine. However, it is really easy to create\n", "your own tools - see documentation\n", - "[here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to do\n", + "[here](https://js.langchain.com/docs/how_to/custom_tools) on how to do\n", "that.\n" ] }, @@ -134,12 +134,12 @@ "## Set up the model\n", "\n", "Now we will load the\n", - "[chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", + "[chat model](https://js.langchain.com/docs/concepts/chat_models/).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. It should work with\n", - " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", + " [tool calling](https://js.langchain.com/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
\n", diff --git a/examples/how-tos/streaming-content.ipynb b/examples/how-tos/streaming-content.ipynb index 06cec4bf9..cae757fd5 100644 --- a/examples/how-tos/streaming-content.ipynb +++ b/examples/how-tos/streaming-content.ipynb @@ -74,7 +74,14 @@ "id": "29814253-ca9b-4844-a8a5-d6b19fbdbdba", "metadata": {}, "source": [ - "## Stream custom data using .stream" + "## Stream custom data using .stream\n", + "\n", + "
\n", + "

Compatibility

\n", + "

\n", + " This section requires @langchain/langgraph>=0.2.20. For help upgrading, see this guide.\n", + "

\n", + "
" ] }, { diff --git a/examples/how-tos/streaming-tokens-without-langchain.ipynb b/examples/how-tos/streaming-tokens-without-langchain.ipynb index 151608ff7..4fba697ed 100644 --- a/examples/how-tos/streaming-tokens-without-langchain.ipynb +++ b/examples/how-tos/streaming-tokens-without-langchain.ipynb @@ -64,7 +64,7 @@ "source": [ "## Calling the model\n", "\n", - "Now, define a method for a LangGraph node that will call the model. It will handle formatting tool calls to and from the model, as well as streaming via [custom callback events](https://js.langchain.com/v0.2/docs/how_to/callbacks_custom_events).\n", + "Now, define a method for a LangGraph node that will call the model. It will handle formatting tool calls to and from the model, as well as streaming via [custom callback events](https://js.langchain.com/docs/how_to/callbacks_custom_events).\n", "\n", "If you are using [LangSmith](https://docs.smith.langchain.com/), you can also wrap the OpenAI client for the same nice tracing you'd get with a LangChain chat model.\n", "\n", @@ -271,7 +271,7 @@ "source": [ "## Streaming tokens\n", "\n", - "And now we can use the [`.streamEvents`](https://js.langchain.com/v0.2/docs/how_to/streaming#using-stream-events) method to get the streamed tokens and tool calls from the OpenAI model:" + "And now we can use the [`.streamEvents`](https://js.langchain.com/docs/how_to/streaming#using-stream-events) method to get the streamed tokens and tool calls from the OpenAI model:" ] }, { diff --git a/examples/how-tos/time-travel.ipynb b/examples/how-tos/time-travel.ipynb index 6f94b87e8..3dc35e824 100644 --- a/examples/how-tos/time-travel.ipynb +++ b/examples/how-tos/time-travel.ipynb @@ -42,7 +42,7 @@ "
\n", "

Note

\n", "

\n", - " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n", + " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.\n", "

\n", "
\n", "\n", @@ -118,7 +118,7 @@ "We will first define the tools we want to use. For this simple example, we will\n", "use create a placeholder search engine. However, it is really easy to create\n", "your own tools - see documentation\n", - "[here](https://js.langchain.com/v0.2/docs/how_to/custom_tools) on how to do\n", + "[here](https://js.langchain.com/docs/how_to/custom_tools) on how to do\n", "that.\n" ] }, @@ -184,12 +184,12 @@ "## Set up the model\n", "\n", "Now we will load the\n", - "[chat model](https://js.langchain.com/v0.2/docs/concepts/#chat-models).\n", + "[chat model](https://js.langchain.com/docs/concepts/chat_models/).\n", "\n", "1. It should work with messages. We will represent all agent state in the form\n", " of messages, so it needs to be able to work well with them.\n", "2. It should work with\n", - " [tool calling](https://js.langchain.com/v0.2/docs/how_to/tool_calling/#passing-tools-to-llms),\n", + " [tool calling](https://js.langchain.com/docs/how_to/tool_calling/#passing-tools-to-llms),\n", " meaning it can return function arguments in its response.\n", "\n", "
\n", diff --git a/examples/how-tos/use-in-web-environments.ipynb b/examples/how-tos/use-in-web-environments.ipynb index df7ba263f..05ae51ba6 100644 --- a/examples/how-tos/use-in-web-environments.ipynb +++ b/examples/how-tos/use-in-web-environments.ipynb @@ -83,7 +83,7 @@ "
<div class=\"admonition warning\">\n",
    "    <p class=\"admonition-title\">Caution</p>\n",
    "    <p>\n",
    "        If you are using LangGraph.js on the frontend, make sure you are not exposing any private keys!\n",
-    "        For chat models, this means you need to use something like WebLLM\n",
+    "        For chat models, this means you need to use something like WebLLM\n",
    "        that can run client-side without authentication.\n",
    "    </p>\n",
    "</div>
\n", @@ -91,10 +91,10 @@ "## Passing config\n", "\n", "The lack of `async_hooks` support in web browsers means that if you are calling\n", - "a [`Runnable`](https://js.langchain.com/v0.2/docs/concepts#interface) within a\n", + "a [`Runnable`](https://js.langchain.com/docs/concepts/runnables/) within a\n", "node (for example, when calling a chat model), you need to manually pass a\n", "`config` object through to properly support tracing,\n", - "[`.streamEvents()`](https://js.langchain.com/v0.2/docs/how_to/streaming#using-stream-events)\n", + "[`.streamEvents()`](https://js.langchain.com/docs/how_to/streaming#using-stream-events)\n", "to stream intermediate steps, and other callback related functionality. This\n", "config object will passed in as the second argument of each node, and should be\n", "used as the second parameter of any `Runnable` method.\n", @@ -317,4 +317,4 @@ }, "nbformat": 4, "nbformat_minor": 2 -} \ No newline at end of file +} diff --git a/examples/multi_agent/hierarchical_agent_teams.ipynb b/examples/multi_agent/hierarchical_agent_teams.ipynb index f7e213657..d11345881 100644 --- a/examples/multi_agent/hierarchical_agent_teams.ipynb +++ b/examples/multi_agent/hierarchical_agent_teams.ipynb @@ -430,7 +430,7 @@ "} from \"@langchain/core/prompts\";\n", "import { JsonOutputToolsParser } from \"langchain/output_parsers\";\n", "import { ChatOpenAI } from \"@langchain/openai\";\n", - "import { Runnable, RunnableConfig } from \"@langchain/core/runnables\";\n", + "import { Runnable } from \"@langchain/core/runnables\";\n", "import { StructuredToolInterface } from \"@langchain/core/tools\";\n", "\n", "const agentMessageModifier = (\n", @@ -456,10 +456,11 @@ " state: any;\n", " agent: Runnable;\n", " name: string;\n", - " config?: RunnableConfig;\n", "}) {\n", - " const { state, agent, name, config } = params;\n", - " const result = await agent.invoke(state, config);\n", + " const { state, agent, name } = params;\n", + " const result = await 
agent.invoke({\n", + " messages: state.messages,\n", + " });\n", " const lastMessage = result.messages[result.messages.length - 1];\n", " return {\n", " messages: [new HumanMessage({ content: lastMessage.content, name })],\n", @@ -558,7 +559,7 @@ "\n", "const llm = new ChatOpenAI({ modelName: \"gpt-4o\" });\n", "\n", - "const searchNode = (state: typeof ResearchTeamState.State, config?: RunnableConfig) => {\n", + "const searchNode = (state: typeof ResearchTeamState.State) => {\n", " const messageModifier = agentMessageModifier(\n", " \"You are a research assistant who can search for up-to-date info using the tavily search engine.\",\n", " [tavilyTool],\n", @@ -569,10 +570,10 @@ " tools: [tavilyTool],\n", " messageModifier,\n", " })\n", - " return runAgentNode({ state, agent: searchAgent, name: \"Search\", config });\n", + " return runAgentNode({ state, agent: searchAgent, name: \"Search\" });\n", "};\n", "\n", - "const researchNode = (state: typeof ResearchTeamState.State, config?: RunnableConfig) => {\n", + "const researchNode = (state: typeof ResearchTeamState.State) => {\n", " const messageModifier = agentMessageModifier(\n", " \"You are a research assistant who can scrape specified urls for more detailed information using the scrapeWebpage function.\",\n", " [scrapeWebpage],\n", @@ -583,7 +584,7 @@ " tools: [scrapeWebpage],\n", " messageModifier,\n", " })\n", - " return runAgentNode({ state, agent: researchAgent, name: \"WebScraper\", config });\n", + " return runAgentNode({ state, agent: researchAgent, name: \"WebScraper\" });\n", "}\n", "\n", "const supervisorAgent = await createTeamSupervisor(\n", @@ -865,7 +866,7 @@ "source": [ "const docWritingLlm = new ChatOpenAI({ modelName: \"gpt-4o\" });\n", "\n", - "const docWritingNode = (state: typeof DocWritingState.State, config?: RunnableConfig) => {\n", + "const docWritingNode = (state: typeof DocWritingState.State) => {\n", " const messageModifier = agentMessageModifier(\n", " `You are an expert writing a 
research document.\\nBelow are files currently in your directory:\\n${state.current_files}`,\n", " [writeDocumentTool, editDocumentTool, readDocumentTool],\n", @@ -877,10 +878,10 @@ " messageModifier,\n", " })\n", " const contextAwareDocWriterAgent = prelude.pipe(docWriterAgent);\n", - " return runAgentNode({ state, agent: contextAwareDocWriterAgent, name: \"DocWriter\", config });\n", + " return runAgentNode({ state, agent: contextAwareDocWriterAgent, name: \"DocWriter\" });\n", "}\n", "\n", - "const noteTakingNode = (state: typeof DocWritingState.State, config?: RunnableConfig) => {\n", + "const noteTakingNode = (state: typeof DocWritingState.State) => {\n", " const messageModifier = agentMessageModifier(\n", " \"You are an expert senior researcher tasked with writing a paper outline and\" +\n", " ` taking notes to craft a perfect paper. ${state.current_files}`,\n", @@ -893,12 +894,11 @@ " messageModifier,\n", " })\n", " const contextAwareNoteTakingAgent = prelude.pipe(noteTakingAgent);\n", - " return runAgentNode({ state, agent: contextAwareNoteTakingAgent, name: \"NoteTaker\", config });\n", + " return runAgentNode({ state, agent: contextAwareNoteTakingAgent, name: \"NoteTaker\" });\n", "}\n", "\n", "const chartGeneratingNode = async (\n", " state: typeof DocWritingState.State,\n", - " config?: RunnableConfig,\n", ") => {\n", " const messageModifier = agentMessageModifier(\n", " \"You are a data viz expert tasked with generating charts for a research project.\" +\n", @@ -912,7 +912,7 @@ " messageModifier,\n", " })\n", " const contextAwareChartGeneratingAgent = prelude.pipe(chartGeneratingAgent);\n", - " return runAgentNode({ state, agent: contextAwareChartGeneratingAgent, name: \"ChartGenerator\", config });\n", + " return runAgentNode({ state, agent: contextAwareChartGeneratingAgent, name: \"ChartGenerator\" });\n", "}\n", "\n", "const docTeamMembers = [\"DocWriter\", \"NoteTaker\", \"ChartGenerator\"];\n", @@ -1238,7 +1238,15 @@ "outputs": [], "source": [ 
"const superGraph = new StateGraph(State)\n", - " .addNode(\"ResearchTeam\", getMessages.pipe(researchChain).pipe(joinGraph))\n", + " .addNode(\"ResearchTeam\", async (input) => {\n", + " const getMessagesResult = await getMessages.invoke(input);\n", + " const researchChainResult = await researchChain.invoke({\n", + " messages: getMessagesResult.messages,\n", + " });\n", + " const joinGraphResult = await joinGraph.invoke({\n", + " messages: researchChainResult.messages,\n", + " });\n", + " })\n", " .addNode(\"PaperWritingTeam\", getMessages.pipe(authoringChain).pipe(joinGraph))\n", " .addNode(\"supervisor\", supervisorNode)\n", " .addEdge(\"ResearchTeam\", \"supervisor\")\n", diff --git a/examples/quickstart.ipynb b/examples/quickstart.ipynb index 1e7736018..996f77f36 100644 --- a/examples/quickstart.ipynb +++ b/examples/quickstart.ipynb @@ -318,7 +318,7 @@ "- give your agent [persistent memory](/langgraphjs/how-tos/persistence/) to continue conversations and debug unexpected behavior\n", "- Put a [human in the loop](/langgraphjs/how-tos/breakpoints/) for actions you want a human to verify\n", "- [Streaming the agent output](/langgraphjs/how-tos/stream-values/) to make your application feel more responsive\n", - "- [Change the AI model in one line of code](https://js.langchain.com/v0.2/docs/how_to/chat_models_universal_init/)\n" + "- [Change the AI model in one line of code](https://js.langchain.com/docs/how_to/chat_models_universal_init/)\n" ] } ], diff --git a/examples/rag/langgraph_adaptive_rag_local.ipynb b/examples/rag/langgraph_adaptive_rag_local.ipynb index 3fff91e10..a5b1bcda6 100644 --- a/examples/rag/langgraph_adaptive_rag_local.ipynb +++ b/examples/rag/langgraph_adaptive_rag_local.ipynb @@ -108,9 +108,9 @@ "documents. 
The code below uses some of\n", "[Lilian Weng's blog posts](https://lilianweng.github.io/) on LLMs and agents as\n", "a data source, then loads them into a demo\n", - "[`MemoryVectorStore`](https://js.langchain.com/v0.2/docs/integrations/vectorstores/memory)\n", + "[`MemoryVectorStore`](https://js.langchain.com/docs/integrations/vectorstores/memory)\n", "instance. It then creates a\n", - "[retriever](https://js.langchain.com/v0.2/docs/concepts#retrievers) from that\n", + "[retriever](https://js.langchain.com/docs/concepts#retrievers) from that\n", "vector store for later use." ] }, @@ -175,7 +175,7 @@ "if they are not.\n", "\n", "You'll use Ollama's\n", - "[JSON mode](https://js.langchain.com/v0.2/docs/integrations/chat/ollama/#json-mode)\n", + "[JSON mode](https://js.langchain.com/docs/integrations/chat/ollama/#json-mode)\n", "to help keep the output format consistent." ] }, @@ -497,7 +497,7 @@ "### Question rewriter\n", "\n", "Create a question rewriter. This chain performs\n", - "[query analysis](https://js.langchain.com/v0.2/docs/tutorials/query_analysis/)\n", + "[query analysis](https://js.langchain.com/docs/tutorials/query_analysis/)\n", "on the user questions and optimizes them for RAG to help handle difficult\n", "queries." ] @@ -550,7 +550,7 @@ "\n", "Finally, you'll need a web search tool that can handle questions out of scope\n", "from the indexed documents. 
The code below initializes a\n", - "[Tavily-powered](https://js.langchain.com/v0.2/docs/integrations/tools/tavily_search)\n", + "[Tavily-powered](https://js.langchain.com/docs/integrations/tools/tavily_search)\n", "search tool" ] }, diff --git a/examples/rag/langgraph_crag.ipynb b/examples/rag/langgraph_crag.ipynb index 17da02209..6d889b052 100644 --- a/examples/rag/langgraph_crag.ipynb +++ b/examples/rag/langgraph_crag.ipynb @@ -49,7 +49,7 @@ "### Install dependencies\n", "\n", "```bash\n", - "npm install cheerio zod langchain @langchain/community @langchain/openai @langchain/core @langchain/textsplitters\n", + "npm install cheerio zod langchain @langchain/community @langchain/openai @langchain/core @langchain/textsplitters @langchain/langgraph\n", "```" ] }, diff --git a/libs/checkpoint-mongodb/package.json b/libs/checkpoint-mongodb/package.json index 4c16f852a..ffb3516c4 100644 --- a/libs/checkpoint-mongodb/package.json +++ b/libs/checkpoint-mongodb/package.json @@ -61,7 +61,7 @@ "jest-environment-node": "^29.6.4", "prettier": "^2.8.3", "release-it": "^17.6.0", - "rollup": "^4.23.0", + "rollup": "^4.22.4", "ts-jest": "^29.1.0", "tsx": "^4.7.0", "typescript": "^4.9.5 || ^5.4.5" diff --git a/libs/checkpoint-postgres/package.json b/libs/checkpoint-postgres/package.json index 2a7304a67..b260b5758 100644 --- a/libs/checkpoint-postgres/package.json +++ b/libs/checkpoint-postgres/package.json @@ -61,7 +61,7 @@ "jest-environment-node": "^29.6.4", "prettier": "^2.8.3", "release-it": "^17.6.0", - "rollup": "^4.5.2", + "rollup": "^4.22.4", "ts-jest": "^29.1.0", "tsx": "^4.7.0", "typescript": "^4.9.5 || ^5.4.5" diff --git a/libs/checkpoint-sqlite/package.json b/libs/checkpoint-sqlite/package.json index 58662b027..a4673ca11 100644 --- a/libs/checkpoint-sqlite/package.json +++ b/libs/checkpoint-sqlite/package.json @@ -62,7 +62,7 @@ "jest-environment-node": "^29.6.4", "prettier": "^2.8.3", "release-it": "^17.6.0", - "rollup": "^4.23.0", + "rollup": "^4.22.4", "ts-jest": 
"^29.1.0", "tsx": "^4.7.0", "typescript": "^4.9.5 || ^5.4.5" diff --git a/libs/checkpoint/package.json b/libs/checkpoint/package.json index b5022a1c1..c22f7b1c3 100644 --- a/libs/checkpoint/package.json +++ b/libs/checkpoint/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/langgraph-checkpoint", - "version": "0.0.11", + "version": "0.0.12", "description": "Library with base interfaces for LangGraph checkpoint savers.", "type": "module", "engines": { @@ -58,7 +58,7 @@ "jest-environment-node": "^29.6.4", "prettier": "^2.8.3", "release-it": "^17.6.0", - "rollup": "^4.23.0", + "rollup": "^4.22.4", "ts-jest": "^29.1.0", "tsx": "^4.7.0", "typescript": "^4.9.5 || ^5.4.5" diff --git a/libs/checkpoint/src/base.ts b/libs/checkpoint/src/base.ts index 5382b4e5f..3512436cd 100644 --- a/libs/checkpoint/src/base.ts +++ b/libs/checkpoint/src/base.ts @@ -8,6 +8,8 @@ import type { } from "./types.js"; import { ERROR, + INTERRUPT, + RESUME, SCHEDULED, type ChannelProtocol, type SendProtocol, @@ -203,4 +205,6 @@ export function maxChannelVersion( export const WRITES_IDX_MAP: Record = { [ERROR]: -1, [SCHEDULED]: -2, + [INTERRUPT]: -3, + [RESUME]: -4, }; diff --git a/libs/checkpoint/src/serde/types.ts b/libs/checkpoint/src/serde/types.ts index aaf429812..a2dfeac08 100644 --- a/libs/checkpoint/src/serde/types.ts +++ b/libs/checkpoint/src/serde/types.ts @@ -1,6 +1,8 @@ export const TASKS = "__pregel_tasks"; export const ERROR = "__error__"; export const SCHEDULED = "__scheduled__"; +export const INTERRUPT = "__interrupt__"; +export const RESUME = "__resume__"; // Mirrors BaseChannel in "@langchain/langgraph" export interface ChannelProtocol< diff --git a/libs/checkpoint/src/types.ts b/libs/checkpoint/src/types.ts index a9edc47ad..90060a042 100644 --- a/libs/checkpoint/src/types.ts +++ b/libs/checkpoint/src/types.ts @@ -15,8 +15,9 @@ export interface CheckpointMetadata { * - "input": The checkpoint was created from an input to invoke/stream/batch. 
* - "loop": The checkpoint was created from inside the pregel loop. * - "update": The checkpoint was created from a manual state update. + * - "fork": The checkpoint was created as a copy of another checkpoint. */ - source: "input" | "loop" | "update"; + source: "input" | "loop" | "update" | "fork"; /** * The step number of the checkpoint. * -1 for the first "input" checkpoint. diff --git a/libs/langgraph/README.md b/libs/langgraph/README.md index 70c1e71df..bab31d659 100644 --- a/libs/langgraph/README.md +++ b/libs/langgraph/README.md @@ -185,7 +185,7 @@ Is there anything else you'd like to know about the weather in New York or any o Initialize the model and tools. - We use `ChatAnthropic` as our LLM. **NOTE:** We need make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for Anthropic tool calling using the `.bindTools()` method. - - We define the tools we want to use -- a weather tool in our case. See the documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to create your own tools. + - We define the tools we want to use -- a weather tool in our case. See the documentation [here](https://js.langchain.com/docs/how_to/custom_tools/) on how to create your own tools.
2.
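Before the `libs/langgraph` hunks below, it helps to see the shape of the feature this PR delivers: `interrupt()` throws a `GraphInterrupt` the first time a node runs, and returns the resume value once a `Command({ resume })` has supplied one. The sketch below is a deliberately simplified standalone model, not the library implementation — in the real code the resume value travels through `CONFIG_KEY_RESUME_VALUE` in the run config and the checkpointer's pending writes, while here a module-level variable stands in for it.

```typescript
// Standalone sketch of the interrupt/resume protocol (see the constants.ts
// and interrupt.ts hunks below). Simplified on purpose: a module-level
// variable stands in for config.configurable[CONFIG_KEY_RESUME_VALUE].

const MISSING = Symbol.for("__missing__");

class GraphInterrupt extends Error {
  constructor(public value: unknown) {
    super("GraphInterrupt");
  }
}

class Command<R = unknown> {
  lg_name = "Command";
  resume: R;
  constructor(args: { resume: R }) {
    this.resume = args.resume;
  }
}

let resumeValue: unknown = MISSING; // stand-in for the stored resume write

function interrupt<I = unknown, R = unknown>(value: I): R {
  // First execution: no resume value yet, so bubble up to the caller.
  if (resumeValue === MISSING) {
    throw new GraphInterrupt(value);
  }
  // Resumed execution: hand the supplied value back to the node.
  return resumeValue as R;
}

function approvalNode(): string {
  const answer = interrupt<string, string>("Approve this step?");
  return `human said: ${answer}`;
}

let interrupted = false;
try {
  approvalNode();
} catch (e) {
  if (e instanceof GraphInterrupt) interrupted = true;
}

// Resuming with a Command carries the human's value back into the node.
resumeValue = new Command({ resume: "yes" }).resume;
const result = approvalNode(); // "human said: yes"
```

In the actual API, the resume value is supplied by re-invoking the graph with `new Command({ resume: ... })` as the input, which `mapCommand` (in the `pregel/io.ts` hunk) turns into a `[NULL_TASK_ID, RESUME, value]` pending write that `_prepareSingleTask` later exposes to the interrupted task.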
diff --git a/libs/langgraph/package.json b/libs/langgraph/package.json index 8ab38958c..68a6ff6df 100644 --- a/libs/langgraph/package.json +++ b/libs/langgraph/package.json @@ -1,6 +1,6 @@ { "name": "@langchain/langgraph", - "version": "0.2.20", + "version": "0.2.23", "description": "LangGraph", "type": "module", "engines": { @@ -31,7 +31,7 @@ "author": "LangChain", "license": "MIT", "dependencies": { - "@langchain/langgraph-checkpoint": "~0.0.10", + "@langchain/langgraph-checkpoint": "~0.0.12", "@langchain/langgraph-sdk": "~0.0.21", "uuid": "^10.0.0", "zod": "^3.23.8" @@ -72,7 +72,7 @@ "pg": "^8.13.0", "prettier": "^2.8.3", "release-it": "^17.6.0", - "rollup": "^4.23.0", + "rollup": "^4.22.4", "ts-jest": "^29.1.0", "tsx": "^4.7.0", "typescript": "^4.9.5 || ^5.4.5", diff --git a/libs/langgraph/src/constants.ts b/libs/langgraph/src/constants.ts index b19e7d3b1..229785469 100644 --- a/libs/langgraph/src/constants.ts +++ b/libs/langgraph/src/constants.ts @@ -1,3 +1,5 @@ +export const MISSING = Symbol.for("__missing__"); + export const INPUT = "__input__"; export const ERROR = "__error__"; export const CONFIG_KEY_SEND = "__pregel_send"; @@ -6,11 +8,13 @@ export const CONFIG_KEY_CHECKPOINTER = "__pregel_checkpointer"; export const CONFIG_KEY_RESUMING = "__pregel_resuming"; export const CONFIG_KEY_TASK_ID = "__pregel_task_id"; export const CONFIG_KEY_STREAM = "__pregel_stream"; +export const CONFIG_KEY_RESUME_VALUE = "__pregel_resume_value"; // this one is part of public API export const CONFIG_KEY_CHECKPOINT_MAP = "checkpoint_map"; export const INTERRUPT = "__interrupt__"; +export const RESUME = "__resume__"; export const RUNTIME_PLACEHOLDER = "__pregel_runtime_placeholder__"; export const RECURSION_LIMIT_DEFAULT = 25; @@ -22,9 +26,11 @@ export const PUSH = "__pregel_push"; export const PULL = "__pregel_pull"; export const TASK_NAMESPACE = "6ba7b831-9dad-11d1-80b4-00c04fd430c8"; +export const NULL_TASK_ID = "00000000-0000-0000-0000-000000000000"; export const RESERVED = 
[ INTERRUPT, + RESUME, ERROR, TASKS, CONFIG_KEY_SEND, @@ -114,3 +120,17 @@ export type Interrupt = { value: any; when: "during"; }; + +export class Command<R = unknown> { + lg_name = "Command"; + + resume: R; + + constructor(args: { resume: R }) { + this.resume = args.resume; + } +} + +export function _isCommand(x: unknown): x is Command { + return typeof x === "object" && !!x && (x as Command).lg_name === "Command"; +} diff --git a/libs/langgraph/src/errors.ts b/libs/langgraph/src/errors.ts index f54c9d6f6..1c0358c1f 100644 --- a/libs/langgraph/src/errors.ts +++ b/libs/langgraph/src/errors.ts @@ -18,6 +18,12 @@ export class BaseLangGraphError extends Error { } } +export class GraphBubbleUp extends BaseLangGraphError { + get is_bubble_up() { + return true; + } +} + export class GraphRecursionError extends BaseLangGraphError { constructor(message?: string, fields?: BaseLangGraphErrorFields) { super(message, fields); @@ -40,7 +46,7 @@ export class GraphValueError extends BaseLangGraphError { } } -export class GraphInterrupt extends BaseLangGraphError { +export class GraphInterrupt extends GraphBubbleUp { interrupts: Interrupt[]; constructor(interrupts?: Interrupt[], fields?: BaseLangGraphErrorFields) { @@ -74,6 +80,10 @@ export class NodeInterrupt extends GraphInterrupt { } } +export function isGraphBubbleUp(e?: Error): e is GraphBubbleUp { + return e !== undefined && (e as GraphBubbleUp).is_bubble_up === true; +} + export function isGraphInterrupt( e?: GraphInterrupt | Error ): e is GraphInterrupt { diff --git a/libs/langgraph/src/interrupt.ts b/libs/langgraph/src/interrupt.ts new file mode 100644 index 000000000..ad033d5ee --- /dev/null +++ b/libs/langgraph/src/interrupt.ts @@ -0,0 +1,18 @@ +import { RunnableConfig } from "@langchain/core/runnables"; +import { AsyncLocalStorageProviderSingleton } from "@langchain/core/singletons"; +import { GraphInterrupt } from "./errors.js"; +import { CONFIG_KEY_RESUME_VALUE, MISSING } from "./constants.js"; + +export function 
interrupt(value: I): R { + const config: RunnableConfig | undefined = + AsyncLocalStorageProviderSingleton.getRunnableConfig(); + if (!config) { + throw new Error("Called interrupt() outside the context of a graph."); + } + const resume = config.configurable?.[CONFIG_KEY_RESUME_VALUE]; + if (resume !== MISSING) { + return resume as R; + } else { + throw new GraphInterrupt([{ value, when: "during" }]); + } +} diff --git a/libs/langgraph/src/pregel/algo.ts b/libs/langgraph/src/pregel/algo.ts index 7b1d71bc2..e353aa462 100644 --- a/libs/langgraph/src/pregel/algo.ts +++ b/libs/langgraph/src/pregel/algo.ts @@ -42,6 +42,10 @@ import { CHECKPOINT_NAMESPACE_END, PUSH, PULL, + RESUME, + CONFIG_KEY_RESUME_VALUE, + NULL_TASK_ID, + MISSING, } from "../constants.js"; import { PregelExecutableTask, PregelTaskDescription } from "./types.js"; import { EmptyChannelError, InvalidUpdateError } from "../errors.js"; @@ -189,6 +193,8 @@ export function _localWrite( commit(writes); } +const IGNORE = new Set([PUSH, RESUME, INTERRUPT]); + export function _applyWrites>( checkpoint: Checkpoint, channels: Cc, @@ -196,6 +202,10 @@ export function _applyWrites>( // eslint-disable-next-line @typescript-eslint/no-explicit-any getNextVersion?: (version: any, channel: BaseChannel) => any ): Record { + // if no task has triggers this is applying writes from the null task only + // so we don't do anything other than update the channels written to + const bumpStep = tasks.some((task) => task.triggers.length > 0); + // Filter out non instances of BaseChannel const onlyChannels = Object.fromEntries( Object.entries(channels).filter(([_, value]) => isBaseChannel(value)) @@ -240,7 +250,7 @@ export function _applyWrites>( } // Clear pending sends - if (checkpoint.pending_sends) { + if (checkpoint.pending_sends?.length && bumpStep) { checkpoint.pending_sends = []; } @@ -252,7 +262,9 @@ export function _applyWrites>( const pendingWritesByManaged = {} as Record; for (const task of tasks) { for (const [chan, 
val] of task.writes) { - if (chan === TASKS) { + if (IGNORE.has(chan)) { + // do nothing + } else if (chan === TASKS) { checkpoint.pending_sends.push({ node: (val as Send).node, args: (val as Send).args, @@ -313,14 +325,16 @@ export function _applyWrites>( } // Channels that weren't updated in this step are notified of a new step - for (const chan of Object.keys(onlyChannels)) { - if (!updatedChannels.has(chan)) { - const updated = onlyChannels[chan].update([]); - if (updated && getNextVersion !== undefined) { - checkpoint.channel_versions[chan] = getNextVersion( - maxVersion, - onlyChannels[chan] - ); + if (bumpStep) { + for (const chan of Object.keys(onlyChannels)) { + if (!updatedChannels.has(chan)) { + const updated = onlyChannels[chan].update([]); + if (updated && getNextVersion !== undefined) { + checkpoint.channel_versions[chan] = getNextVersion( + maxVersion, + onlyChannels[chan] + ); + } } } } @@ -350,6 +364,7 @@ export function _prepareNextTasks< Cc extends StrRecord >( checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -363,6 +378,7 @@ export function _prepareNextTasks< Cc extends StrRecord >( checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -376,6 +392,7 @@ export function _prepareNextTasks< Cc extends StrRecord >( checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -393,6 +410,7 @@ export function _prepareNextTasks< const task = _prepareSingleTask( [PUSH, i], checkpoint, + pendingWrites, processes, channels, managed, @@ -410,6 +428,7 @@ export function _prepareNextTasks< const task = _prepareSingleTask( [PULL, name], checkpoint, + pendingWrites, processes, channels, managed, @@ -430,6 +449,7 @@ export function _prepareSingleTask< >( taskPath: [string, 
string | number], checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -444,6 +464,7 @@ export function _prepareSingleTask< >( taskPath: [string, string | number], checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -458,6 +479,7 @@ export function _prepareSingleTask< >( taskPath: [string, string | number], checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -472,6 +494,7 @@ export function _prepareSingleTask< >( taskPath: [string, string | number], checkpoint: ReadonlyCheckpoint, + pendingWrites: [string, string, unknown][] | undefined, processes: Nn, channels: Cc, managed: ManagedValueMapping, @@ -537,6 +560,9 @@ export function _prepareSingleTask< metadata = { ...metadata, ...proc.metadata }; } const writes: [keyof Cc, unknown][] = []; + const resume = pendingWrites?.find( + (w) => [taskId, NULL_TASK_ID].includes(w[0]) && w[1] === RESUME + ); return { name: packet.node, input: packet.args, @@ -587,6 +613,7 @@ export function _prepareSingleTask< ...configurable[CONFIG_KEY_CHECKPOINT_MAP], [parentNamespace]: checkpoint.id, }, + [CONFIG_KEY_RESUME_VALUE]: resume ? 
resume[2] : MISSING, checkpoint_id: undefined, checkpoint_ns: taskCheckpointNamespace, }, @@ -661,6 +688,9 @@ export function _prepareSingleTask< metadata = { ...metadata, ...proc.metadata }; } const writes: [keyof Cc, unknown][] = []; + const resume = pendingWrites?.find( + (w) => [taskId, NULL_TASK_ID].includes(w[0]) && w[1] === RESUME + ); const taskCheckpointNamespace = `${checkpointNamespace}${CHECKPOINT_NAMESPACE_END}${taskId}`; return { name, @@ -714,6 +744,7 @@ export function _prepareSingleTask< ...configurable[CONFIG_KEY_CHECKPOINT_MAP], [parentNamespace]: checkpoint.id, }, + [CONFIG_KEY_RESUME_VALUE]: resume ? resume[2] : MISSING, checkpoint_id: undefined, checkpoint_ns: taskCheckpointNamespace, }, diff --git a/libs/langgraph/src/pregel/index.ts b/libs/langgraph/src/pregel/index.ts index cbdf3b1a1..bfbdb21e6 100644 --- a/libs/langgraph/src/pregel/index.ts +++ b/libs/langgraph/src/pregel/index.ts @@ -49,6 +49,7 @@ import { CHECKPOINT_NAMESPACE_END, CONFIG_KEY_STREAM, CONFIG_KEY_TASK_ID, + Command, } from "../constants.js"; import { PregelExecutableTask, @@ -64,6 +65,7 @@ import { GraphRecursionError, GraphValueError, InvalidUpdateError, + isGraphBubbleUp, isGraphInterrupt, } from "../errors.js"; import { @@ -405,6 +407,7 @@ export class Pregel< const nextTasks = Object.values( _prepareNextTasks( saved.checkpoint, + saved.pendingWrites, this.nodes, channels, managed, @@ -585,7 +588,7 @@ export class Pregel< values: Record | unknown, asNode?: keyof Nn | string ): Promise { - const checkpointer = + const checkpointer: BaseCheckpointSaver | undefined = inputConfig.configurable?.[CONFIG_KEY_CHECKPOINTER] ?? this.checkpointer; if (!checkpointer) { throw new GraphValueError("No checkpointer set"); @@ -637,7 +640,7 @@ export class Pregel< let checkpointConfig = patchConfigurable(config, { checkpoint_ns: config.configurable?.checkpoint_ns ?? 
"", }); - if (saved) { + if (saved?.config.configurable) { checkpointConfig = patchConfigurable(config, saved.config.configurable); } @@ -648,7 +651,21 @@ export class Pregel< createCheckpoint(checkpoint, undefined, step), { source: "update", - step, + step: step + 1, + writes: {}, + parents: saved?.metadata?.parents ?? {}, + }, + {} + ); + return patchCheckpointMap(nextConfig, saved ? saved.metadata : undefined); + } + if (values == null && asNode === "__copy__") { + const nextConfig = await checkpointer.put( + saved?.parentConfig ?? checkpointConfig, + createCheckpoint(checkpoint, undefined, step), + { + source: "fork", + step: step + 1, writes: {}, parents: saved?.metadata?.parents ?? {}, }, @@ -901,10 +918,19 @@ export class Pregel< * @param options.debug Whether to print debug information during execution. */ override async stream( - input: PregelInputType, + input: PregelInputType | Command, options?: Partial> ): Promise> { - return super.stream(input, options); + // The ensureConfig method called internally defaults recursionLimit to 25 if not + // passed directly in `options`. + // There is currently no way in _streamIterator to determine whether this was + // set by by ensureConfig or manually by the user, so we specify the bound value here + // and override if it is passed as an explicit param in `options`. 
+ const config = { + recursionLimit: this.config?.recursionLimit, + ...options, + }; + return super.stream(input, config); } protected async prepareSpecs( @@ -971,7 +997,7 @@ export class Pregel< } override async *_streamIterator( - input: PregelInputType, + input: PregelInputType | Command, options?: Partial> ): AsyncGenerator { const streamSubgraphs = options?.subgraphs; @@ -1103,11 +1129,11 @@ export class Pregel< // Timeouts will be thrown for await (const { task, error } of taskStream) { if (error !== undefined) { - if (isGraphInterrupt(error)) { + if (isGraphBubbleUp(error)) { if (loop.isNested) { throw error; } - if (error.interrupts.length) { + if (isGraphInterrupt(error) && error.interrupts.length) { loop.putWrites( task.id, error.interrupts.map((interrupt) => [INTERRUPT, interrupt]) @@ -1117,13 +1143,11 @@ export class Pregel< loop.putWrites(task.id, [ [ERROR, { message: error.message, name: error.name }], ]); + throw error; } } else { loop.putWrites(task.id, task.writes); } - if (error !== undefined && !isGraphInterrupt(error)) { - throw error; - } } if (debug) { @@ -1221,7 +1245,7 @@ export class Pregel< * @param options.debug Whether to print debug information during execution. */ override async invoke( - input: PregelInputType, + input: PregelInputType | Command, options?: Partial> ): Promise { const streamMode = options?.streamMode ?? 
"values"; diff --git a/libs/langgraph/src/pregel/io.ts b/libs/langgraph/src/pregel/io.ts index 4cb0aacc1..eb16553db 100644 --- a/libs/langgraph/src/pregel/io.ts +++ b/libs/langgraph/src/pregel/io.ts @@ -1,7 +1,9 @@ import type { PendingWrite } from "@langchain/langgraph-checkpoint"; +import { validate } from "uuid"; + import type { BaseChannel } from "../channels/base.js"; import type { PregelExecutableTask } from "./types.js"; -import { TAG_HIDDEN } from "../constants.js"; +import { Command, NULL_TASK_ID, RESUME, TAG_HIDDEN } from "../constants.js"; import { EmptyChannelError } from "../errors.js"; export function readChannel( @@ -50,6 +52,25 @@ export function readChannels( } } +export function* mapCommand( + cmd: Command +): Generator<[string, string, unknown]> { + if (cmd.resume) { + if ( + typeof cmd.resume === "object" && + !!cmd.resume && + Object.keys(cmd.resume).length && + Object.keys(cmd.resume).every(validate) + ) { + for (const [tid, resume] of Object.entries(cmd.resume)) { + yield [tid, RESUME, resume]; + } + } else { + yield [NULL_TASK_ID, RESUME, cmd.resume]; + } + } +} + /** * Map input chunk to a sequence of pending writes in the form [channel, value]. 
*/ diff --git a/libs/langgraph/src/pregel/loop.ts b/libs/langgraph/src/pregel/loop.ts index 32cb2e7b5..0c86ccc63 100644 --- a/libs/langgraph/src/pregel/loop.ts +++ b/libs/langgraph/src/pregel/loop.ts @@ -22,7 +22,9 @@ import { } from "../channels/base.js"; import { PregelExecutableTask, StreamMode } from "./types.js"; import { + _isCommand, CHECKPOINT_NAMESPACE_SEPARATOR, + Command, CONFIG_KEY_CHECKPOINT_MAP, CONFIG_KEY_READ, CONFIG_KEY_RESUMING, @@ -30,8 +32,8 @@ import { ERROR, INPUT, INTERRUPT, + RESUME, TAG_HIDDEN, - TASKS, } from "../constants.js"; import { _applyWrites, @@ -46,6 +48,7 @@ import { prefixGenerator, } from "../utils.js"; import { + mapCommand, mapInput, mapOutputUpdates, mapOutputValues, @@ -71,14 +74,13 @@ import { LangGraphRunnableConfig } from "./runnable_types.js"; const INPUT_DONE = Symbol.for("INPUT_DONE"); const INPUT_RESUMING = Symbol.for("INPUT_RESUMING"); const DEFAULT_LOOP_LIMIT = 25; -const SPECIAL_CHANNELS = [ERROR, INTERRUPT]; // [namespace, streamMode, payload] export type StreamChunk = [string[], StreamMode, unknown]; export type PregelLoopInitializeParams = { // eslint-disable-next-line @typescript-eslint/no-explicit-any - input?: any; + input?: any | Command; config: RunnableConfig; checkpointer?: BaseCheckpointSaver; outputKeys: string | string[]; @@ -93,7 +95,7 @@ export type PregelLoopInitializeParams = { type PregelLoopParams = { // eslint-disable-next-line @typescript-eslint/no-explicit-any - input?: any; + input?: any | Command; config: RunnableConfig; checkpointer?: BaseCheckpointSaver; checkpoint: Checkpoint; @@ -185,7 +187,7 @@ function createDuplexStream(...streams: IterableReadableWritableStream[]) { export class PregelLoop { // eslint-disable-next-line @typescript-eslint/no-explicit-any - protected input?: any; + protected input?: any | Command; // eslint-disable-next-line @typescript-eslint/no-explicit-any output: any; @@ -227,8 +229,6 @@ export class PregelLoop { protected skipDoneTasks: boolean; - protected 
taskWritesLeft: number = 0; - protected prevCheckpointConfig: RunnableConfig | undefined; status: @@ -297,7 +297,9 @@ export class PregelLoop { config.configurable[CONFIG_KEY_STREAM] ); } - const skipDoneTasks = config.configurable?.checkpoint_id === undefined; + const skipDoneTasks = config.configurable + ? !("checkpoint_id" in config.configurable) + : true; const isNested = CONFIG_KEY_READ in (config.configurable ?? {}); if ( !isNested && @@ -446,18 +448,6 @@ export class PregelLoop { if (writes.length === 0) { return; } - // adjust taskWritesLeft - const firstChannel = writes[0][0]; - const anyChannelIsSend = writes.find(([channel]) => channel === TASKS); - const alwaysSave = - anyChannelIsSend || SPECIAL_CHANNELS.includes(firstChannel); - if (!alwaysSave && !this.taskWritesLeft) { - return this._outputWrites(taskId, writes); - } else if (firstChannel !== INTERRUPT) { - // INTERRUPT makes us want to save the last task's writes - // so we don't decrement tasksWritesLeft in that case - this.taskWritesLeft -= 1; - } // save writes const pendingWrites: CheckpointPendingWrite[] = writes.map( ([key, value]) => { @@ -480,7 +470,9 @@ export class PregelLoop { if (putWritePromise !== undefined) { this.checkpointerPromises.push(putWritePromise); } - this._outputWrites(taskId, writes); + if (this.tasks) { + this._outputWrites(taskId, writes); + } } _outputWrites(taskId: string, writes: [string, unknown][], cached = false) { @@ -605,6 +597,7 @@ export class PregelLoop { const nextTasks = _prepareNextTasks( this.checkpoint, + this.checkpointPendingWrites, this.nodes, this.channels, this.managed, @@ -619,7 +612,6 @@ export class PregelLoop { } ); this.tasks = nextTasks; - this.taskWritesLeft = Object.values(this.tasks).length - 1; // Produce debug output if (this.checkpointer) { @@ -649,7 +641,7 @@ export class PregelLoop { // if there are pending writes from a previous loop, apply them if (this.skipDoneTasks && this.checkpointPendingWrites.length > 0) { for (const [tid, k, 
v] of this.checkpointPendingWrites) { - if (k === ERROR || k === INTERRUPT) { + if (k === ERROR || k === INTERRUPT || k === RESUME) { continue; } const task = Object.values(this.tasks).find((t) => t.id === tid); @@ -745,8 +737,24 @@ export class PregelLoop { ) ); this._emit(valuesOutput); - // map inputs to channel updates + } else if (_isCommand(this.input)) { + const writes: { [key: string]: PendingWrite[] } = {}; + // group writes by task id + for (const [tid, key, value] of mapCommand(this.input)) { + if (writes[tid] === undefined) { + writes[tid] = []; + } + writes[tid].push([key, value]); + } + if (Object.keys(writes).length === 0) { + throw new EmptyInputError("Received empty Command input"); + } + // save writes + for (const [tid, ws] of Object.entries(writes)) { + this.putWrites(tid, ws); + } } else { + // map inputs to channel updates const inputWrites = await gatherIterator(mapInput(inputKeys, this.input)); if (inputWrites.length === 0) { throw new EmptyInputError( @@ -755,6 +763,7 @@ export class PregelLoop { } const discardTasks = _prepareNextTasks( this.checkpoint, + this.checkpointPendingWrites, this.nodes, this.channels, this.managed, diff --git a/libs/langgraph/src/pregel/retry.ts b/libs/langgraph/src/pregel/retry.ts index 6e17c987a..60094130e 100644 --- a/libs/langgraph/src/pregel/retry.ts +++ b/libs/langgraph/src/pregel/retry.ts @@ -1,4 +1,4 @@ -import { getSubgraphsSeenSet, isGraphInterrupt } from "../errors.js"; +import { getSubgraphsSeenSet, isGraphBubbleUp } from "../errors.js"; import { PregelExecutableTask } from "./types.js"; import type { RetryPolicy } from "./utils/index.js"; @@ -129,7 +129,7 @@ async function _runWithRetry( } catch (e: any) { error = e; error.pregelTaskId = pregelTask.id; - if (isGraphInterrupt(error)) { + if (isGraphBubbleUp(error)) { break; } if (resolvedRetryPolicy === undefined) { diff --git a/libs/langgraph/src/tests/pregel.test.ts b/libs/langgraph/src/tests/pregel.test.ts index ad593ef09..d7d8bd348 100644 --- 
a/libs/langgraph/src/tests/pregel.test.ts +++ b/libs/langgraph/src/tests/pregel.test.ts @@ -88,11 +88,13 @@ import { MultipleSubgraphsError, NodeInterrupt, } from "../errors.js"; -import { ERROR, INTERRUPT, PULL, PUSH, Send } from "../constants.js"; +import { Command, ERROR, INTERRUPT, PULL, PUSH, Send } from "../constants.js"; import { ManagedValueMapping } from "../managed/base.js"; import { SharedValue } from "../managed/shared_value.js"; import { MessagesAnnotation } from "../graph/messages_annotation.js"; import { LangGraphRunnableConfig } from "../pregel/runnable_types.js"; +import { initializeAsyncLocalStorageSingleton } from "../setup/async_local_storage.js"; +import { interrupt } from "../interrupt.js"; expect.extend({ toHaveKeyStartingWith(received: object, prefix: string) { @@ -120,6 +122,11 @@ export function runPregelTests( afterAll(teardown); } + beforeAll(() => { + // Will occur naturally if user imports from main `@langchain/langgraph` endpoint. + initializeAsyncLocalStorageSingleton(); + }); + describe("Channel", () => { describe("writeTo", () => { it("should return a ChannelWrite instance with the expected writes", () => { @@ -860,6 +867,7 @@ export function runPregelTests( const taskDescriptions = Object.values( _prepareNextTasks( checkpoint, + [], processes, channels, managed, @@ -988,6 +996,7 @@ export function runPregelTests( const tasks = Object.values( _prepareNextTasks( checkpoint, + [], processes, channels, managed, @@ -1223,7 +1232,6 @@ export function runPregelTests( expect(await app.invoke({ input: 2 })).toEqual({ output: 3 }); }); - it("should invoke two processes and get correct output", async () => { const addOne = jest.fn((x: number): number => x + 1); @@ -2700,10 +2708,9 @@ export function runPregelTests( s: typeof StateAnnotation.State ): Partial => { toolTwoNodeCount += 1; - if (s.market === "DE") { - throw new NodeInterrupt("Just because..."); - } - return { my_key: " all good" }; + const answer: string = + s.market === "DE" ? 
interrupt("Just because...") : " all good"; + return { my_key: answer }; }; const toolTwoGraph = new StateGraph(StateAnnotation) @@ -2791,6 +2798,21 @@ export function runPregelTests( await gatherIterator(toolTwoCheckpointer.list(thread1, { limit: 2 })) ).slice(-1)[0].config, }); + + // resume execution + expect( + await gatherIterator( + toolTwo.stream(new Command({ resume: " this is great" }), { + configurable: { thread_id: "1" }, + }) + ) + ).toEqual([ + { + tool_two: { + my_key: " this is great", + }, + }, + ]); }); it("should not cancel node on other node interrupted", async () => { @@ -7733,6 +7755,7 @@ export function runPregelTests( subjects: ["cats", "dogs"], jokes: [], }); + await awaitAllCallbacks(); expect(tracer.runs.length).toEqual(1); // check state @@ -8758,6 +8781,33 @@ export function runPregelTests( historyNs.map(sanitizeCheckpoints) ); }); + + it("should pass recursion limit set via .withConfig", async () => { + const StateAnnotation = Annotation.Root({ + prop: Annotation, + }); + const graph = new StateGraph(StateAnnotation) + .addNode("first", async () => { + return { + prop: "foo", + }; + }) + .addNode("second", async () => { + return {}; + }) + .addEdge("__start__", "first") + .addEdge("first", "second") + .compile(); + expect(await graph.invoke({})).toEqual({ + prop: "foo", + }); + const graphWithConfig = graph.withConfig({ + recursionLimit: 1, + }); + await expect(graphWithConfig.invoke({})).rejects.toThrow( + GraphRecursionError + ); + }); } runPregelTests(() => new MemorySaverAssertImmutable()); diff --git a/libs/langgraph/src/web.ts b/libs/langgraph/src/web.ts index 5518c38ca..182513785 100644 --- a/libs/langgraph/src/web.ts +++ b/libs/langgraph/src/web.ts @@ -31,7 +31,8 @@ export { } from "./channels/index.js"; export { type AnnotationRoot as _INTERNAL_ANNOTATION_ROOT } from "./graph/index.js"; export { type RetryPolicy } from "./pregel/utils/index.js"; -export { Send } from "./constants.js"; +export { Send, Command, type Interrupt } 
from "./constants.js"; +export { interrupt } from "./interrupt.js"; export { MemorySaver, diff --git a/scripts/release_workspace.cjs b/scripts/release_workspace.cjs index c1893a029..9c4f6759d 100644 --- a/scripts/release_workspace.cjs +++ b/scripts/release_workspace.cjs @@ -4,20 +4,41 @@ const fs = require("fs"); const path = require("path"); const { spawn } = require("child_process"); const readline = require("readline"); -const semver = require('semver') +const semver = require("semver"); -const PRIMARY_PROJECTS = ["@langchain/langgraph"]; const RELEASE_BRANCH = "release"; const MAIN_BRANCH = "main"; +/** + * Handles execSync errors and logs them in a readable format. + * @param {string} command + * @param {{ doNotExit?: boolean }} [options] - Optional configuration + * @param {boolean} [options.doNotExit] - Whether or not to exit the process on error + */ +function execSyncWithErrorHandling(command, options = {}) { + try { + execSync( + command, + { stdio: "inherit" } // This will stream output in real-time + ); + } catch (error) { + console.error(error.message); + if (!options.doNotExit) { + process.exit(1); + } + } +} + /** * Get the version of a workspace inside a directory. - * - * @param {string} workspaceDirectory + * + * @param {string} workspaceDirectory * @returns {string} The version of the workspace in the input directory. */ function getWorkspaceVersion(workspaceDirectory) { - const pkgJsonFile = fs.readFileSync(path.join(process.cwd(), workspaceDirectory, "package.json")); + const pkgJsonFile = fs.readFileSync( + path.join(process.cwd(), workspaceDirectory, "package.json") + ); const parsedJSONFile = JSON.parse(pkgJsonFile); return parsedJSONFile.version; } @@ -26,29 +47,41 @@ function getWorkspaceVersion(workspaceDirectory) { * Finds all workspaces in the monorepo and returns an array of objects. * Each object in the return value contains the relative path to the workspace * directory, along with the full package.json file contents. 
- * + * * @returns {Array<{ dir: string, packageJSON: Record}>} */ function getAllWorkspaces() { const possibleWorkspaceDirectories = ["./libs/*"]; - const allWorkspaces = possibleWorkspaceDirectories.flatMap((workspaceDirectory) => { - if (workspaceDirectory.endsWith("*")) { - // List all folders inside directory, require, and return the package.json. - const allDirs = fs.readdirSync(path.join(process.cwd(), workspaceDirectory.replace("*", ""))); - const subDirs = allDirs.map((dir) => { - return { - dir: `${workspaceDirectory.replace("*", "")}${dir}`, - packageJSON: require(path.join(process.cwd(), `${workspaceDirectory.replace("*", "")}${dir}`, "package.json")) - } - }); - return subDirs; + const allWorkspaces = possibleWorkspaceDirectories.flatMap( + (workspaceDirectory) => { + if (workspaceDirectory.endsWith("*")) { + // List all folders inside directory, require, and return the package.json. + const allDirs = fs.readdirSync( + path.join(process.cwd(), workspaceDirectory.replace("*", "")) + ); + const subDirs = allDirs.map((dir) => { + return { + dir: `${workspaceDirectory.replace("*", "")}${dir}`, + packageJSON: require(path.join( + process.cwd(), + `${workspaceDirectory.replace("*", "")}${dir}`, + "package.json" + )), + }; + }); + return subDirs; + } + const packageJSON = require(path.join( + process.cwd(), + workspaceDirectory, + "package.json" + )); + return { + dir: workspaceDirectory, + packageJSON, + }; } - const packageJSON = require(path.join(process.cwd(), workspaceDirectory, "package.json")); - return { - dir: workspaceDirectory, - packageJSON, - }; - }); + ); return allWorkspaces; } @@ -56,18 +89,24 @@ function getAllWorkspaces() { * Writes the JSON file with the updated dependency version. Accounts * for version prefixes, eg ~, ^, >, <, >=, <=, ||, *. Also skips * versions which are "latest" or "workspace:*". 
- * - * @param {Array} workspaces - * @param {"dependencies" | "devDependencies" | "peerDependencies"} dependencyType - * @param {string} workspaceName - * @param {string} newVersion + * + * @param {Array} workspaces + * @param {"dependencies" | "devDependencies" | "peerDependencies"} dependencyType + * @param {string} workspaceName + * @param {string} newVersion */ -function updateDependencies(workspaces, dependencyType, workspaceName, newVersion) { +function updateDependencies( + workspaces, + dependencyType, + workspaceName, + newVersion +) { const versionPrefixes = ["~", "^", ">", "<", ">=", "<=", "||", "*"]; const skipVersions = ["latest", "workspace:*"]; workspaces.forEach((workspace) => { - const currentVersion = workspace.packageJSON[dependencyType]?.[workspaceName]; + const currentVersion = + workspace.packageJSON[dependencyType]?.[workspaceName]; if (currentVersion) { const prefix = versionPrefixes.find((p) => currentVersion.startsWith(p)); const shouldSkip = skipVersions.some((v) => currentVersion === v); @@ -75,7 +114,10 @@ function updateDependencies(workspaces, dependencyType, workspaceName, newVersio if (!shouldSkip) { const versionToUpdate = prefix ? `${prefix}${newVersion}` : newVersion; workspace.packageJSON[dependencyType][workspaceName] = versionToUpdate; - fs.writeFileSync(path.join(workspace.dir, "package.json"), JSON.stringify(workspace.packageJSON, null, 2) + "\n"); + fs.writeFileSync( + path.join(workspace.dir, "package.json"), + JSON.stringify(workspace.packageJSON, null, 2) + "\n" + ); } } }); @@ -85,7 +127,7 @@ function updateDependencies(workspaces, dependencyType, workspaceName, newVersio * Runs `release-it` with args in the input package directory, * passing the new version as an argument, along with other * release-it args. - * + * * @param {string} packageDirectory The directory to run yarn release in. * @param {string} npm2FACode The 2FA code for NPM. * @param {string | undefined} tag An optional tag to publish to. 
@@ -95,11 +137,21 @@ async function runYarnRelease(packageDirectory, npm2FACode, tag) { return new Promise((resolve, reject) => { const workingDirectory = path.join(process.cwd(), packageDirectory); const tagArg = tag ? `--npm.tag=${tag}` : ""; - const args = ["release-it", `--npm.otp=${npm2FACode}`, tagArg, "--config", ".release-it.json"]; - + const args = [ + "release-it", + `--npm.otp=${npm2FACode}`, + tagArg, + "--config", + ".release-it.json", + ]; + console.log(`Running command: "yarn ${args.join(" ")}"`); - const yarnReleaseProcess = spawn("yarn", args, { stdio: "inherit", cwd: workingDirectory }); + // Use 'inherit' for stdio to allow direct CLI interaction + const yarnReleaseProcess = spawn("yarn", args, { + stdio: "inherit", + cwd: workingDirectory, + }); yarnReleaseProcess.on("close", (code) => { if (code === 0) { @@ -110,7 +162,7 @@ async function runYarnRelease(packageDirectory, npm2FACode, tag) { }); yarnReleaseProcess.on("error", (err) => { - reject(err); + reject(`Failed to start process: ${err.message}`); }); }); } @@ -119,7 +171,7 @@ async function runYarnRelease(packageDirectory, npm2FACode, tag) { * Finds all `package.json`'s which contain the input workspace as a dependency. * Then, updates the dependency to the new version, runs yarn install and * commits the changes. - * + * * @param {string} workspaceName The name of the workspace to bump dependencies for. * @param {string} workspaceDirectory The path to the workspace directory. * @param {Array<{ dir: string, packageJSON: Record}>} allWorkspaces @@ -127,7 +179,13 @@ async function runYarnRelease(packageDirectory, npm2FACode, tag) { * @param {string} preReleaseVersion The version of the workspace before it was released. 
* @returns {void} */ -function bumpDeps(workspaceName, workspaceDirectory, allWorkspaces, tag, preReleaseVersion) { +function bumpDeps( + workspaceName, + workspaceDirectory, + allWorkspaces, + tag, + preReleaseVersion +) { // Read workspace file, get version (edited by release-it), and bump pkgs to that version. let updatedWorkspaceVersion = getWorkspaceVersion(workspaceDirectory); if (!semver.valid(updatedWorkspaceVersion)) { @@ -138,11 +196,15 @@ function bumpDeps(workspaceName, workspaceDirectory, allWorkspaces, tag, preRele // If the updated version is not greater than the pre-release version, // the branch is out of sync. Pull from github and check again. if (!semver.gt(updatedWorkspaceVersion, preReleaseVersion)) { - console.log("Updated version is not greater than the pre-release version. Pulling from github and checking again."); - execSync(`git pull origin ${RELEASE_BRANCH}`); + console.log( + "Updated version is not greater than the pre-release version. Pulling from github and checking again." + ); + execSyncWithErrorHandling(`git pull origin ${RELEASE_BRANCH}`); updatedWorkspaceVersion = getWorkspaceVersion(workspaceDirectory); if (!semver.gt(updatedWorkspaceVersion, preReleaseVersion)) { - console.warn(`Workspace version has not changed in repo. Version in repo: ${updatedWorkspaceVersion}. Exiting.`); + console.warn( + `Workspace version has not changed in repo. Version in repo: ${updatedWorkspaceVersion}. 
Exiting.` + ); process.exit(0); } } @@ -156,86 +218,157 @@ function bumpDeps(workspaceName, workspaceDirectory, allWorkspaces, tag, preRele versionString = `${updatedWorkspaceVersion}-${tag}`; } - execSync(`git checkout ${MAIN_BRANCH}`); + execSyncWithErrorHandling(`git checkout ${MAIN_BRANCH}`); const newBranchName = `bump-${workspaceName}-to-${versionString}`; console.log(`Checking out new branch: ${newBranchName}`); - execSync(`git checkout -b ${newBranchName}`); + execSyncWithErrorHandling(`git checkout -b ${newBranchName}`); - const allWorkspacesWhichDependOn = allWorkspaces.filter(({ packageJSON }) => + const allWorkspacesWhichDependOn = allWorkspaces.filter(({ packageJSON }) => Object.keys(packageJSON.dependencies ?? {}).includes(workspaceName) ); - const allWorkspacesWhichDevDependOn = allWorkspaces.filter(({ packageJSON }) => - Object.keys(packageJSON.devDependencies ?? {}).includes(workspaceName) + const allWorkspacesWhichDevDependOn = allWorkspaces.filter( + ({ packageJSON }) => + Object.keys(packageJSON.devDependencies ?? {}).includes(workspaceName) ); - const allWorkspacesWhichPeerDependOn = allWorkspaces.filter(({ packageJSON }) => - Object.keys(packageJSON.peerDependencies ?? {}).includes(workspaceName) + const allWorkspacesWhichPeerDependOn = allWorkspaces.filter( + ({ packageJSON }) => + Object.keys(packageJSON.peerDependencies ?? {}).includes(workspaceName) ); // For console log, get all workspaces which depend and filter out duplicates. 
- const allWhichDependOn = new Set([ - ...allWorkspacesWhichDependOn, - ...allWorkspacesWhichDevDependOn, - ...allWorkspacesWhichPeerDependOn, - ].map(({ packageJSON }) => packageJSON.name)); + const allWhichDependOn = new Set( + [ + ...allWorkspacesWhichDependOn, + ...allWorkspacesWhichDevDependOn, + ...allWorkspacesWhichPeerDependOn, + ].map(({ packageJSON }) => packageJSON.name) + ); if (allWhichDependOn.size !== 0) { - console.log(`Found ${[...allWhichDependOn].length} workspaces which depend on ${workspaceName}. + console.log(`Found ${ + [...allWhichDependOn].length + } workspaces which depend on ${workspaceName}. Workspaces: - ${[...allWhichDependOn].map((name) => name).join("\n- ")} `); // Update packages which depend on the input workspace. - updateDependencies(allWorkspacesWhichDependOn, "dependencies", workspaceName, updatedWorkspaceVersion); - updateDependencies(allWorkspacesWhichDevDependOn, "devDependencies", workspaceName, updatedWorkspaceVersion); - updateDependencies(allWorkspacesWhichPeerDependOn, "peerDependencies", workspaceName, updatedWorkspaceVersion); + updateDependencies( + allWorkspacesWhichDependOn, + "dependencies", + workspaceName, + updatedWorkspaceVersion + ); + updateDependencies( + allWorkspacesWhichDevDependOn, + "devDependencies", + workspaceName, + updatedWorkspaceVersion + ); + updateDependencies( + allWorkspacesWhichPeerDependOn, + "peerDependencies", + workspaceName, + updatedWorkspaceVersion + ); console.log("Updated package.json's! Running yarn install."); try { - execSync(`yarn install`); + execSyncWithErrorHandling(`yarn install`); } catch (_) { - console.log("Yarn install failed. Likely because NPM has not finished publishing the new version. Continuing.") + console.log( + "Yarn install failed. Likely because NPM has not finished publishing the new version. Continuing." + ); } // Add all current changes, commit, push and log branch URL. 
console.log("Adding and committing all changes."); - execSync(`git add -A`); - execSync(`git commit -m "all[minor]: bump deps on ${workspaceName} to ${versionString}"`); + execSyncWithErrorHandling(`git add -A`); + execSyncWithErrorHandling( + `git commit -m "all[minor]: bump deps on ${workspaceName} to ${versionString}"` + ); console.log("Pushing changes."); - execSync(`git push -u origin ${newBranchName}`); - console.log("🔗 Open %s and merge the bump-deps PR.", `\x1b[34mhttps://github.com/langchain-ai/langgraphjs/compare/${newBranchName}?expand=1\x1b[0m`); + execSyncWithErrorHandling(`git push -u origin ${newBranchName}`); + console.log( + "🔗 Open %s and merge the bump-deps PR.", + `\x1b[34mhttps://github.com/langchain-ai/langgraphjs/compare/${newBranchName}?expand=1\x1b[0m` + ); } else { console.log(`No workspaces depend on ${workspaceName}.`); } } +/** + * Create a commit message for the input workspace and version. + * + * @param {string} workspaceName + * @param {string} version + */ +function createCommitMessage(workspaceName, version) { + const cleanedWorkspaceName = workspaceName.replace("@langchain/", ""); + return `release(${cleanedWorkspaceName}): ${version}`; +} + +/** + * Commits all changes and pushes to the current branch. + * + * @param {string} workspaceName The name of the workspace being released + * @param {string} version The new version being released + * @param {boolean} onlyPush Whether or not to only push the changes, and not commit + * @returns {void} + */ +function commitAndPushChanges(workspaceName, version, onlyPush) { + if (!onlyPush) { + console.log("Committing changes..."); + const commitMsg = createCommitMessage(workspaceName, version); + try { + execSyncWithErrorHandling("git add -A", { doNotExit: true }); + execSyncWithErrorHandling(`git commit -m "${commitMsg}"`, { + doNotExit: true, + }); + } catch (_) { + // No-op. Likely erroring because there are no unstaged changes. 
+ } + } + + console.log("Pushing changes..."); + // Pushes to the current branch + execSyncWithErrorHandling( + "git push -u origin $(git rev-parse --abbrev-ref HEAD)" + ); + console.log("Successfully committed and pushed changes."); +} + /** * Verifies the current branch is main, then checks out a new release branch * and pushes an empty commit. - * + * * @returns {void} * @throws {Error} If the current branch is not main. */ function checkoutReleaseBranch() { const currentBranch = execSync("git branch --show-current").toString().trim(); - if (currentBranch === MAIN_BRANCH) { + if (currentBranch === MAIN_BRANCH || currentBranch === RELEASE_BRANCH) { console.log(`Checking out '${RELEASE_BRANCH}' branch.`); - execSync(`git checkout -B ${RELEASE_BRANCH}`); - execSync(`git push -u origin ${RELEASE_BRANCH}`); + execSyncWithErrorHandling(`git checkout -B ${RELEASE_BRANCH}`); + execSyncWithErrorHandling(`git push -u origin ${RELEASE_BRANCH}`); } else { - throw new Error(`Current branch is not ${MAIN_BRANCH}. Current branch: ${currentBranch}`); + throw new Error( + `Current branch is not ${MAIN_BRANCH} or ${RELEASE_BRANCH}. Current branch: ${currentBranch}` + ); } } /** * Prompts the user for input and returns the input. This is used * for requesting an OTP from the user for NPM 2FA. - * + * * @param {string} question The question to log to the users terminal. * @returns {Promise} The user input. */ async function getUserInput(question) { const rl = readline.createInterface({ input: process.stdin, - output: process.stdout + output: process.stdout, }); return new Promise((resolve) => { @@ -246,13 +379,51 @@ async function getUserInput(question) { }); } +/** + * Checks if there are any uncommitted changes in the git repository. 
+ * + * @returns {boolean} True if there are uncommitted changes, false otherwise + */ +function hasUncommittedChanges() { + try { + // Check for uncommitted changes (both staged and unstaged) + const uncommittedOutput = execSync("git status --porcelain").toString(); + + return uncommittedOutput.length > 0; + } catch (error) { + console.error("Error checking git status:", error); + // If we can't check, better to assume there are changes + return true; + } +} + +/** + * Checks if there are any committed but unpushed changes in the git repository. + * + * @returns {boolean} True if there are unpushed commits, false otherwise + */ +function hasStagedChanges() { + try { + // Check for committed but not yet pushed changes + const unPushedOutput = execSync("git log '@{u}..'").toString(); + + return unPushedOutput.length > 0; + } catch (error) { + console.error("Error checking git status:", error); + // If we can't check, better to assume there are changes + return true; + } +} async function main() { const program = new Command(); program .description("Release a new workspace version to NPM.") .option("--workspace ", "Workspace name, eg @langchain/langgraph") - .option("--bump-deps", "Whether or not to bump other workspaces that depend on this one.") + .option( + "--bump-deps", + "Whether or not to bump other workspaces that depend on this one." + ) .option("--tag ", "Optionally specify a tag to publish to."); program.parse(); @@ -265,10 +436,18 @@ async function main() { throw new Error("--workspace is a required flag."); } + if (hasUncommittedChanges()) { + console.warn( + "[WARNING]: You have uncommitted changes. These will be included in the release commit." + ); + } + // Find the workspace package.json's.
const allWorkspaces = getAllWorkspaces(); - const matchingWorkspace = allWorkspaces.find(({ packageJSON }) => packageJSON.name === options.workspace); - + const matchingWorkspace = allWorkspaces.find( + ({ packageJSON }) => packageJSON.name === options.workspace + ); + if (!matchingWorkspace) { throw new Error(`Could not find workspace ${options.workspace}`); } @@ -278,30 +457,34 @@ async function main() { // Run build, lint, tests console.log("Running build, lint, and tests."); - execSync(`yarn turbo:command run --filter ${options.workspace} build lint test --concurrency 1`); + execSyncWithErrorHandling( + `yarn turbo:command run --filter ${options.workspace} build lint test --concurrency 1` + ); console.log("Successfully ran build, lint, and tests."); - // Only run export tests for primary projects. - if (PRIMARY_PROJECTS.includes(options.workspace.trim())) { - // Run export tests. - // LangChain must be built before running export tests. - console.log("Building '@langchain/langgraph' and running export tests."); - execSync(`yarn run turbo:command build --filter=@langchain/langgraph`); - execSync(`yarn run test:exports:docker`); - console.log("Successfully built @langchain/langgraph, and tested exports."); - } else { - console.log("Skipping export tests for non primary project."); - } - - const npm2FACode = await getUserInput("Please enter your NPM 2FA authentication code:"); + const npm2FACode = await getUserInput( + "Please enter your NPM 2FA authentication code:" + ); const preReleaseVersion = getWorkspaceVersion(matchingWorkspace.dir); // Run `release-it` on workspace await runYarnRelease(matchingWorkspace.dir, npm2FACode, options.tag); - + + const hasStaged = hasStagedChanges(); + const hasUnCommitted = hasUncommittedChanges(); + if (hasStaged || hasUnCommitted) { + const updatedVersion = getWorkspaceVersion(matchingWorkspace.dir); + // Only push and do not commit if there are staged changes and no uncommitted changes + const onlyPush = hasStaged && 
!hasUnCommitted; + commitAndPushChanges(options.workspace, updatedVersion, onlyPush); + } + // Log release branch URL - console.log("🔗 Open %s and merge the release PR.", `\x1b[34mhttps://github.com/langchain-ai/langgraphjs/compare/release?expand=1\x1b[0m`); + console.log( + "🔗 Open %s and merge the release PR.", + `\x1b[34mhttps://github.com/langchain-ai/langgraphjs/compare/release?expand=1\x1b[0m` + ); // If `bump-deps` flag is set, find all workspaces which depend on the input workspace. // Then, update their package.json to use the new version of the input workspace. @@ -315,6 +498,9 @@ async function main() { preReleaseVersion ); } -}; +} -main(); +main().catch((error) => { + console.error(error); + process.exit(1); +}); diff --git a/yarn.lock b/yarn.lock index a6489e2b6..d219972df 100644 --- a/yarn.lock +++ b/yarn.lock @@ -2256,7 +2256,7 @@ __metadata: mongodb: ^6.8.0 prettier: ^2.8.3 release-it: ^17.6.0 - rollup: ^4.23.0 + rollup: ^4.22.4 ts-jest: ^29.1.0 tsx: ^4.7.0 typescript: ^4.9.5 || ^5.4.5 @@ -2294,7 +2294,7 @@ __metadata: pg: ^8.12.0 prettier: ^2.8.3 release-it: ^17.6.0 - rollup: ^4.5.2 + rollup: ^4.22.4 ts-jest: ^29.1.0 tsx: ^4.7.0 typescript: ^4.9.5 || ^5.4.5 @@ -2332,7 +2332,7 @@ __metadata: jest-environment-node: ^29.6.4 prettier: ^2.8.3 release-it: ^17.6.0 - rollup: ^4.23.0 + rollup: ^4.22.4 ts-jest: ^29.1.0 tsx: ^4.7.0 typescript: ^4.9.5 || ^5.4.5 @@ -2393,7 +2393,7 @@ __metadata: languageName: unknown linkType: soft -"@langchain/langgraph-checkpoint@workspace:*, @langchain/langgraph-checkpoint@workspace:libs/checkpoint, @langchain/langgraph-checkpoint@~0.0.10": +"@langchain/langgraph-checkpoint@workspace:*, @langchain/langgraph-checkpoint@workspace:libs/checkpoint, @langchain/langgraph-checkpoint@~0.0.12": version: 0.0.0-use.local resolution: "@langchain/langgraph-checkpoint@workspace:libs/checkpoint" dependencies: @@ -2418,7 +2418,7 @@ __metadata: jest-environment-node: ^29.6.4 prettier: ^2.8.3 release-it: ^17.6.0 - rollup: ^4.23.0 + rollup: 
^4.22.4 ts-jest: ^29.1.0 tsx: ^4.7.0 typescript: ^4.9.5 || ^5.4.5 @@ -2448,7 +2448,7 @@ __metadata: "@langchain/anthropic": ^0.3.5 "@langchain/community": ^0.3.9 "@langchain/core": ^0.3.16 - "@langchain/langgraph-checkpoint": ~0.0.10 + "@langchain/langgraph-checkpoint": ~0.0.12 "@langchain/langgraph-checkpoint-postgres": "workspace:*" "@langchain/langgraph-checkpoint-sqlite": "workspace:*" "@langchain/langgraph-sdk": ~0.0.21 @@ -2478,7 +2478,7 @@ __metadata: pg: ^8.13.0 prettier: ^2.8.3 release-it: ^17.6.0 - rollup: ^4.23.0 + rollup: ^4.22.4 ts-jest: ^29.1.0 tsx: ^4.7.0 typescript: ^4.9.5 || ^5.4.5
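
A note on the routing rule introduced by the new `mapCommand` generator in `libs/langgraph/src/pregel/io.ts`: a `Command({ resume })` is routed either to specific tasks (when the resume value is a non-empty object keyed entirely by task UUIDs) or to a null-task sentinel that the first interrupted task picks up. Below is a simplified, standalone sketch of that rule; the `NULL_TASK_ID`/`RESUME` values and the UUID validator are local stand-ins for illustration, not the real exports from `constants.js` or the `uuid` package.

```typescript
// Standalone sketch of the resume-routing rule added in `mapCommand`
// (libs/langgraph/src/pregel/io.ts). Names below are local stand-ins,
// not the real library exports.

// Stand-ins for the library's NULL_TASK_ID / RESUME constants.
const NULL_TASK_ID = "00000000-0000-0000-0000-000000000000";
const RESUME = "__resume__";

// Stand-in for `validate` from the `uuid` package.
const isUuid = (s: string): boolean =>
  /^[0-9a-f]{8}-[0-9a-f]{4}-[1-8][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i.test(
    s
  );

interface CommandLike {
  resume?: unknown;
}

// If `resume` is a non-empty object keyed entirely by task UUIDs, route each
// value to its task; otherwise emit a single write against the null task id,
// which the first interrupted task will consume.
function* mapCommand(cmd: CommandLike): Generator<[string, string, unknown]> {
  if (cmd.resume) {
    const keys =
      typeof cmd.resume === "object" && cmd.resume !== null
        ? Object.keys(cmd.resume)
        : [];
    if (keys.length > 0 && keys.every(isUuid)) {
      for (const [tid, resume] of Object.entries(
        cmd.resume as Record<string, unknown>
      )) {
        yield [tid, RESUME, resume];
      }
    } else {
      yield [NULL_TASK_ID, RESUME, cmd.resume];
    }
  }
}

// A plain resume value targets the null task id:
console.log([...mapCommand({ resume: " this is great" })]);
// A UUID-keyed object targets individual tasks:
console.log([
  ...mapCommand({ resume: { "123e4567-e89b-42d3-a456-426614174000": "ok" } }),
]);
```

This mirrors the test added in `pregel.test.ts`, where `toolTwo.stream(new Command({ resume: " this is great" }), ...)` resumes a graph paused by `interrupt()` without specifying a task id.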