
Merge branch 'main' into nc/21jan/parent-command-merge-state
vbarda committed Jan 23, 2025
2 parents c0773de + 38bbe67 commit 533dac8
Showing 22 changed files with 1,756 additions and 201 deletions.
58 changes: 13 additions & 45 deletions docs/docs/concepts/high_level.md
@@ -1,58 +1,26 @@
# Why LangGraph?

-LLMs are extremely powerful, particularly when connected to other systems such as a retriever or APIs. This is why many LLM applications use a control flow of steps before and/or after LLM calls. As an example, [RAG](https://github.com/langchain-ai/rag-from-scratch) retrieves documents relevant to a question and passes those documents to an LLM to ground the response. A control flow of steps before and/or after an LLM call is often called a "chain." Chains are a popular paradigm for programming with LLMs and offer a high degree of reliability; the same set of steps runs with each chain invocation.
+## LLM applications

-However, we often want LLM systems that can pick their own control flow! This is one definition of an [agent](https://blog.langchain.dev/what-is-an-agent/): an agent is a system that uses an LLM to decide the control flow of an application. Unlike a chain, an agent gives an LLM some degree of control over the sequence of steps in the application. Examples of using an LLM to decide the control flow of an application:
+LLMs make it possible to embed intelligence into a new class of applications. There are many patterns for building applications that use LLMs. [Workflows](https://www.anthropic.com/research/building-effective-agents) have scaffolding of predefined code paths around LLM calls. LLMs can direct the control flow through these predefined code paths, which some consider to be an "[agentic system](https://www.anthropic.com/research/building-effective-agents)". In other cases, it's possible to remove this scaffolding, creating autonomous agents that can [plan](https://huyenchip.com/2025/01/07/agents.html), take actions via [tool calls](https://python.langchain.com/docs/concepts/tool_calling/), and directly respond [to the feedback from their own actions](https://research.google/blog/react-synergizing-reasoning-and-acting-in-language-models/) with further actions.

-- Using an LLM to route between two potential paths
-- Using an LLM to decide which of many tools to call
-- Using an LLM to decide whether the generated answer is sufficient or more work is needed
+![Agent Workflow](img/agent_workflow.png)

-There are many different types of [agent architectures](https://blog.langchain.dev/what-is-a-cognitive-architecture/) to consider, which give an LLM varying levels of control. On one extreme, a router allows an LLM to select a single step from a specified set of options; on the other extreme, a fully autonomous long-running agent may have complete freedom to select any sequence of steps that it wants for a given problem.
+## What LangGraph provides

-![Agent Types](img/agent_types.png)
+LangGraph provides low-level supporting infrastructure that sits underneath *any* workflow or agent. It does not abstract prompts or architecture, and provides three central benefits:

-Several concepts are utilized in many agent architectures:
+### Persistence

-- [Tool calling](agentic_concepts.md#tool-calling): this is often how LLMs make decisions
-- Action taking: oftentimes, the LLM's outputs are used as the input to an action
-- [Memory](agentic_concepts.md#memory): reliable systems need to have knowledge of things that occurred
-- [Planning](agentic_concepts.md#planning): planning steps (either explicit or implicit) help ensure that the LLM makes decisions with high fidelity
+LangGraph has a [persistence layer](https://langchain-ai.github.io/langgraph/concepts/persistence/), which offers a number of benefits:

-## Challenges
+- [Memory](https://langchain-ai.github.io/langgraph/concepts/memory/): LangGraph persists arbitrary aspects of your application's state, supporting memory of conversations and other updates within and across user interactions;
+- [Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/): Because state is checkpointed, execution can be interrupted and resumed, allowing for decisions, validation, and corrections via human input.
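
*Editor's note (not part of this diff):* a minimal sketch of what the persistence layer looks like in code, assuming the in-memory `MemorySaver` checkpointer and an illustrative counter state; node and thread names are made up for the example.

```python
# Minimal persistence sketch (illustrative; MemorySaver is the in-memory
# checkpointer -- a production deployment would use a database-backed one).
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    count: int


def increment(state: State) -> dict:
    # Runs against state restored from the latest checkpoint for this thread.
    return {"count": state["count"] + 1}


builder = StateGraph(State)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

# Compiling with a checkpointer enables persistence.
graph = builder.compile(checkpointer=MemorySaver())

# State is scoped to a thread; reusing the thread_id resumes its memory.
config = {"configurable": {"thread_id": "example-thread"}}
graph.invoke({"count": 0}, config)
print(graph.get_state(config).values)  # e.g. {'count': 1}
```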

-In practice, there is often a trade-off between control and reliability. As we give LLMs more control, the application often becomes less reliable. This can be due to factors such as LLM non-determinism and/or errors in the tools (or steps) that the agent selects.
+### Streaming

-![Agent Challenge](img/challenge.png)
+LangGraph also provides support for [streaming](../how-tos/index.md#streaming) workflow / agent state to the user (or developer) over the course of execution. LangGraph supports streaming of both events ([such as feedback from a tool call](../how-tos/stream-updates.ipynb)) and [tokens from LLM calls](../how-tos/streaming-tokens.ipynb) embedded in an application.
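
*Editor's note (not part of this diff):* a hedged sketch of streaming in practice, reusing the `graph` from the persistence sketch above; the exact chunk shapes are illustrative.

```python
# Streaming sketch (illustrative). stream_mode="updates" yields each node's
# state update as it completes, rather than waiting for the final result.
config = {"configurable": {"thread_id": "example-thread"}}
for chunk in graph.stream({"count": 0}, config, stream_mode="updates"):
    print(chunk)  # e.g. {'increment': {'count': 1}}

# stream_mode="messages" instead yields LLM tokens as they are produced,
# which applies when a node calls a chat model.
```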

-## Core Principles
+### Debugging and Deployment

-The motivation of LangGraph is to help bend the curve, preserving higher reliability as we give the agent more control over the application. We'll outline a few specific pillars of LangGraph that make it well suited for building reliable agents.

-![Langgraph](img/langgraph.png)

-**Controllability**

-LangGraph gives the developer a high degree of [control](../how-tos/index.md#controllability) by expressing the flow of the application as a set of nodes and edges. All nodes can access and modify a common state (memory). The control flow of the application can be set using edges that connect nodes, either deterministically or via conditional logic.
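
*Editor's note (not part of this diff):* a minimal sketch of this node/edge model with an illustrative conditional route; the node names and the routing condition are assumptions for the example, not from the docs.

```python
# Control flow as nodes and edges over a shared state (illustrative).
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    answer: str
    is_sufficient: bool


def generate(state: State) -> dict:
    # Stand-in for an LLM call that drafts an answer.
    return {"answer": "draft", "is_sufficient": False}


def improve(state: State) -> dict:
    return {"answer": state["answer"] + " (revised)", "is_sufficient": True}


def route(state: State) -> str:
    # Conditional edge: the shared state decides the next step.
    return END if state["is_sufficient"] else "improve"


builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_node("improve", improve)
builder.add_edge(START, "generate")               # deterministic edge
builder.add_conditional_edges("generate", route)  # conditional edge
builder.add_edge("improve", END)
graph = builder.compile()

print(graph.invoke({"answer": "", "is_sufficient": False}))
```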

-**Persistence**

-LangGraph gives the developer many options for [persisting](../how-tos/index.md#persistence) graph state using short-term or long-term (e.g., via a database) memory.

-**Human-in-the-Loop**

-The persistence layer enables several different [human-in-the-loop](../how-tos/index.md#human-in-the-loop) interaction patterns with agents; for example, it's possible to pause an agent, review its state, edit its state, and approve a follow-up step.
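
*Editor's note (not part of this diff):* a hedged sketch of that pause/review/edit/approve loop, reusing the `builder` from the sketch above and pausing before the illustrative `improve` node.

```python
# Human-in-the-loop sketch (illustrative).
from langgraph.checkpoint.memory import MemorySaver

graph = builder.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["improve"],  # pause before this node runs
)

config = {"configurable": {"thread_id": "review-1"}}
graph.invoke({"answer": "", "is_sufficient": False}, config)

# Execution is paused at the checkpoint; a human can review and edit state...
print(graph.get_state(config).values)
graph.update_state(config, {"answer": "human-corrected draft"})

# ...then approve the follow-up step by resuming with no new input.
graph.invoke(None, config)
```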

-**Streaming**

-LangGraph comes with first-class support for [streaming](../how-tos/index.md#streaming), which can expose state to the user (or developer) over the course of agent execution. LangGraph supports streaming of both events ([like a tool call being taken](../how-tos/stream-updates.ipynb)) and [tokens that an LLM may emit](../how-tos/streaming-tokens.ipynb).

-## Debugging

-Once you've built a graph, you often want to test and debug it. [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file) is a specialized IDE for visualization and debugging of LangGraph applications.

-![Langgraph Studio](img/lg_studio.png)

-## Deployment

-Once you have confidence in your LangGraph application, you often want an easy path to deployment. [LangGraph Platform](../concepts/index.md#langgraph-platform) offers a range of options for deploying LangGraph graphs.
+LangGraph provides an easy onramp for testing, debugging, and deploying applications via [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/). This includes [Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/), an IDE that enables visualization, interaction, and debugging of workflows or agents. It also includes numerous [options](https://langchain-ai.github.io/langgraph/tutorials/deployment/) for deployment.
Binary file added docs/docs/concepts/img/agent_workflow.png
16 changes: 14 additions & 2 deletions docs/docs/how-tos/deploy-self-hosted.md
@@ -25,8 +25,20 @@ If you would like to deploy LangGraph Cloud on Kubernetes, you can use this [Hel

You will eventually need to pass in the following environment variables to the LangGraph Deploy server:

-- `REDIS_URI`: Connection details to a Redis instance. Redis will be used as a pub-sub broker to enable streaming real time output from background runs.
-- `DATABASE_URI`: Postgres connection details. Postgres will be used to store assistants, threads, runs, persist thread state and long term memory, and to manage the state of the background task queue with 'exactly once' semantics.
+- `REDIS_URI`: Connection details to a Redis instance. Redis will be used as a pub-sub broker to enable streaming real time output from background runs. The value of `REDIS_URI` must be a valid [Redis connection URI](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).
+
+    !!! Note "Shared Redis Instance"
+        Multiple self-hosted deployments can share the same Redis instance. For example, for `Deployment A`, `REDIS_URI` can be set to `redis://<hostname_1>:<port>/1` and for `Deployment B`, `REDIS_URI` can be set to `redis://<hostname_1>:<port>/2`.
+
+        `1` and `2` are different database numbers within the same instance, but `<hostname_1>` is shared. **The same database number cannot be used for separate deployments**.
+
+- `DATABASE_URI`: Postgres connection details. Postgres will be used to store assistants, threads, runs, persist thread state and long term memory, and to manage the state of the background task queue with 'exactly once' semantics. The value of `DATABASE_URI` must be a valid [Postgres connection URI](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).
+
+    !!! Note "Shared Postgres Instance"
+        Multiple self-hosted deployments can share the same Postgres instance. For example, for `Deployment A`, `DATABASE_URI` can be set to `postgres://<user>:<password>@/<database_name_1>?host=<hostname_1>` and for `Deployment B`, `DATABASE_URI` can be set to `postgres://<user>:<password>@/<database_name_2>?host=<hostname_1>`.
+
+        `<database_name_1>` and `<database_name_2>` are different databases within the same instance, but `<hostname_1>` is shared. **The same database cannot be used for separate deployments**.

- `LANGSMITH_API_KEY`: (If using [Self-Hosted Lite](../concepts/deployment_options.md#self-hosted-lite)) LangSmith API key. This will be used to authenticate ONCE at server start up.
- `LANGGRAPH_CLOUD_LICENSE_KEY`: (If using [Self-Hosted Enterprise](../concepts/deployment_options.md#self-hosted-enterprise)) LangGraph Platform license key. This will be used to authenticate ONCE at server start up.
- `LANGCHAIN_ENDPOINT`: To send traces to a [self-hosted LangSmith](https://docs.smith.langchain.com/self_hosting) instance, set `LANGCHAIN_ENDPOINT` to the hostname of the self-hosted LangSmith instance.
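
*Editor's note (not part of this diff):* to make the shared-instance notes above concrete, here is a hypothetical pair of environment configurations; the hostnames, credentials, and database names are placeholders, not values from this repository.

```shell
# Hypothetical config: two deployments sharing one Redis host and one
# Postgres host, isolated by Redis database number and Postgres database name.

# Deployment A
REDIS_URI=redis://redis.internal:6379/1
DATABASE_URI=postgres://lg_user:secret@/deployment_a?host=pg.internal

# Deployment B (same hosts, different Redis DB number and Postgres database)
REDIS_URI=redis://redis.internal:6379/2
DATABASE_URI=postgres://lg_user:secret@/deployment_b?host=pg.internal
```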
1 change: 1 addition & 0 deletions docs/docs/tutorials/index.md
@@ -9,6 +9,7 @@ New to LangGraph or LLM app development? Read this material to get up and runnin
## Get Started 🚀 {#quick-start}

- [LangGraph Quickstart](introduction.ipynb): Build a chatbot that can use tools and keep track of conversation history. Add human-in-the-loop capabilities and explore how time-travel works.
+- [LangGraph Cheatsheet For Common Workflows](workflows.ipynb): Overview of the most common workflows and agent architectures in LangGraph.
- [LangGraph Server Quickstart](langgraph-platform/local-server.md): Launch a LangGraph server locally and interact with it using REST API and LangGraph Studio Web UI.
- [LangGraph Template Quickstart](../concepts/template_applications.md): Start building with LangGraph Platform using a template application.
- [Deploy with LangGraph Cloud Quickstart](../cloud/quick_start.md): Deploy a LangGraph app using LangGraph Cloud.
1,240 changes: 1,240 additions & 0 deletions docs/docs/tutorials/workflows.ipynb

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions docs/mkdocs.yml
@@ -296,6 +296,7 @@ nav:
- Quick Start:
- Quick Start: tutorials#quick-start
- tutorials/introduction.ipynb
+- tutorials/workflows.ipynb
- tutorials/langgraph-platform/local-server.md
- cloud/quick_start.md
- Chatbots:
3 changes: 2 additions & 1 deletion libs/langgraph/langgraph/constants.py
@@ -23,6 +23,7 @@
"""The last (maybe virtual) node in graph-style Pregel."""
SELF = sys.intern("__self__")
"""The implicit branch that handles each node's Control values."""
+PREVIOUS = sys.intern("__previous__")

# --- Reserved write keys ---
INPUT = sys.intern("__input__")
@@ -78,7 +79,7 @@
# holds a callback to be called when a node is finished
CONFIG_KEY_SCRATCHPAD = sys.intern("__pregel_scratchpad")
# holds a mutable dict for temporary storage scoped to the current task
-CONFIG_KEY_END = sys.intern("__pregel_previous")
+CONFIG_KEY_PREVIOUS = sys.intern("__pregel_previous")
# holds the previous return value from a stateful Pregel graph.

# --- Other constants ---
