docs: update readme / docs intro page (#3082)
ccurme authored Jan 21, 2025 · 2 parents 802e6df + ca6f8e2 · commit 24bc0c0

Showing 2 changed files with 244 additions and 144 deletions.

README.md (122 additions, 72 deletions)
## Overview

[LangGraph](https://langchain-ai.github.io/langgraph/) is a library for building
stateful, multi-actor applications with LLMs, used to create agent and multi-agent
workflows. Check out an introductory tutorial [here](https://langchain-ai.github.io/langgraph/tutorials/introduction/).


LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.

### Why use LangGraph?

LangGraph provides fine-grained control over both the flow and state of your
agent applications. It implements a central
[persistence layer](https://langchain-ai.github.io/langgraph/concepts/persistence/),
enabling features that are common to most agent architectures:

- **Memory**: LangGraph persists arbitrary aspects of your application's state,
supporting memory of conversations and other updates within and across user
interactions;
- **Human-in-the-loop**: Because state is checkpointed, execution can be interrupted
and resumed, allowing for decisions, validation, and corrections at key stages via
human input.

Standardizing these components allows individuals and teams to focus on the behavior
of their agent, instead of its supporting infrastructure.
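
For a concrete sense of what the persistence layer does, here is a minimal sketch (the `respond` node and the thread id are placeholders for illustration, not part of the library):

```python
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.checkpoint.memory import MemorySaver

def respond(state: MessagesState):
    # Placeholder node; a real agent would call an LLM here.
    return {"messages": [{"role": "assistant", "content": "Hello!"}]}

builder = StateGraph(MessagesState)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)

# Compiling with a checkpointer persists state per thread, which is what
# enables memory and human-in-the-loop interrupts.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "example-thread"}}
graph.invoke({"messages": [{"role": "user", "content": "hi"}]}, config)

# The thread's state survives the invocation and can be read back:
print(graph.get_state(config).values["messages"])
```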

Through [LangGraph Platform](#langgraph-platform), LangGraph also provides tooling for
the development, deployment, debugging, and monitoring of your applications.

LangGraph integrates seamlessly with
[LangChain](https://python.langchain.com/docs/introduction/) and
[LangSmith](https://docs.smith.langchain.com/) (but does not require them).

To learn more about LangGraph, check out our first LangChain Academy
course, *Introduction to LangGraph*, available for free
[here](https://academy.langchain.com/courses/intro-to-langgraph).

### LangGraph Platform

[LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform) is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several components that work together to support the development, deployment, debugging, and monitoring of LangGraph applications: [LangGraph Server](https://langchain-ai.github.io/langgraph/concepts/langgraph_server) (APIs), [LangGraph SDKs](https://langchain-ai.github.io/langgraph/concepts/sdk) (clients for the APIs), [LangGraph CLI](https://langchain-ai.github.io/langgraph/concepts/langgraph_cli) (command line tool for building the server), and [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio) (UI/debugger).

See deployment options [here](https://langchain-ai.github.io/langgraph/concepts/deployment_options/)
(includes a free tier).

Here are some common issues that arise in complex deployments, which LangGraph Platform addresses:

- **Streaming support**: LangGraph Server provides [multiple streaming modes](https://langchain-ai.github.io/langgraph/concepts/streaming) optimized for various application needs

```python
final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```

```
"Based on the search results, I can tell you that the current weather in San Francisco is:\n\nTemperature: 60 degrees Fahrenheit\nConditions: Foggy\n\nSan Francisco is known for its microclimates and frequent fog, especially during the summer months. The temperature of 60°F (about 15.5°C) is quite typical for the city, which tends to have mild temperatures year-round. The fog, often referred to as "Karl the Fog" by locals, is a characteristic feature of San Francisco\'s weather, particularly in the mornings and evenings.\n\nIs there anything else you\'d like to know about the weather in San Francisco or any other location?"
```

Now when we pass the same <code>"thread_id"</code>, the conversation context is retained via the saved state (i.e. the stored list of messages).

```python
final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what about ny"}]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```

```
"Based on the search results, I can tell you that the current weather in New York City is:\n\nTemperature: 90 degrees Fahrenheit (approximately 32.2 degrees Celsius)\nConditions: Sunny\n\nThis weather is quite different from what we just saw in San Francisco. New York is experiencing much warmer temperatures right now. Here are a few points to note:\n\n1. The temperature of 90°F is quite hot, typical of summer weather in New York City.\n2. The sunny conditions suggest clear skies, which is great for outdoor activities but also means it might feel even hotter due to direct sunlight.\n3. This kind of weather in New York often comes with high humidity, which can make it feel even warmer than the actual temperature suggests.\n\nIt's interesting to see the stark contrast between San Francisco's mild, foggy weather and New York's hot, sunny conditions. This difference illustrates how varied weather can be across different parts of the United States, even on the same day.\n\nIs there anything else you'd like to know about the weather in New York or any other location?"
```
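
You can also inspect the checkpointed state directly (a sketch reusing the <code>app</code> and thread from above):

```python
# Retrieve the latest checkpoint for thread 42
snapshot = app.get_state({"configurable": {"thread_id": 42}})
# Both exchanges (San Francisco and New York) are retained in the history
print([message.content for message in snapshot.values["messages"]])
```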
</details>

```python
final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```

<b>Step-by-step Breakdown</b>:


<details>
<summary>Initialize the model and tools.</summary>
<ul>
<li>
We use <code>ChatAnthropic</code> as our LLM. <strong>NOTE:</strong> we need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format expected by the model's tool-calling API using the <code>.bind_tools()</code> method.
</li>
<li>
We define the tools we want to use - a search tool in our case. It is really easy to create your own tools - see the documentation on how to do that <a href="https://python.langchain.com/docs/modules/agents/tools/custom_tools">here</a>.
</li>
</ul>
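
A rough sketch of this step (the tool body and the model version are illustrative assumptions, not the exact code above):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def search(query: str):
    """Call to surf the web."""
    # Illustrative stub; a real tool would call a search API.
    return "It's 60 degrees and foggy."

tools = [search]
# bind_tools passes the tool schemas to the model so it can emit tool calls
model = ChatAnthropic(model="claude-3-5-sonnet-latest").bind_tools(tools)
```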
</details>

<details>
<summary>Initialize graph with state.</summary>

<ul>
<li>We initialize the graph (<code>StateGraph</code>) by passing a state schema (in our case <code>MessagesState</code>).</li>
<li><code>MessagesState</code> is a prebuilt state schema that has one attribute -- a list of LangChain <code>Message</code> objects, as well as logic for merging the updates from each node into the state.</li>
</ul>
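
In code, this step is a single constructor call (sketch):

```python
from langgraph.graph import StateGraph, MessagesState

# MessagesState: a list of messages plus logic to merge node updates into it
workflow = StateGraph(MessagesState)
```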
</details>

<details>
<summary>Define graph nodes.</summary>

There are two main nodes we need:

<ul>
<li>The <code>agent</code> node: responsible for deciding what (if any) actions to take.</li>
<li>The <code>tools</code> node that invokes tools: if the agent decides to take an action, this node will then execute that action.</li>
</ul>
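
A sketch of the two nodes, assuming the <code>model</code>, <code>tools</code>, and <code>workflow</code> from the previous steps:

```python
from langgraph.graph import MessagesState
from langgraph.prebuilt import ToolNode

def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    # The returned message is appended to state by MessagesState's merge logic
    return {"messages": [response]}

workflow.add_node("agent", call_model)       # decides what (if anything) to do
workflow.add_node("tools", ToolNode(tools))  # executes the requested tool calls
```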
</details>

<details>
<summary>Define entry point and graph edges.</summary>

First, we need to set the entry point for graph execution - the <code>agent</code> node.

Then we define one normal and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (<code>MessagesState</code>). In our case, the destination is not known until the agent (LLM) decides.

<ul>
<li>Conditional edge: after the agent is called, we should either:
<ul>
<li>a. Run tools if the agent said to take an action, OR</li>
<li>b. Finish (respond to the user) if the agent did not ask to run tools</li>
</ul>
</li>
<li>Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next</li>
</ul>
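
A sketch of the wiring, continuing the same example (the <code>should_continue</code> router is an illustrative name, not a library function):

```python
from langgraph.graph import START, END

def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    # Run tools if the model requested them, otherwise finish
    return "tools" if last_message.tool_calls else END

workflow.add_edge(START, "agent")                         # entry point
workflow.add_conditional_edges("agent", should_continue)  # conditional edge
workflow.add_edge("tools", "agent")                       # normal edge
```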
</details>

<details>
<summary>Compile the graph.</summary>

<ul>
<li>
When we compile the graph, we turn it into a LangChain
<a href="https://python.langchain.com/v0.2/docs/concepts/#runnable-interface">Runnable</a>,
which automatically enables calling <code>.invoke()</code>, <code>.stream()</code> and <code>.batch()</code>
with your inputs
</li>
<li>
We can also optionally pass a checkpointer object for persisting state between graph runs, enabling memory,
human-in-the-loop workflows, time travel and more. In our case we use <code>MemorySaver</code> -
a simple in-memory checkpointer
</li>
</ul>
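
For example (a sketch, continuing from above):

```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()  # in-memory; use a database-backed saver in production
app = workflow.compile(checkpointer=checkpointer)
```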
</details>

<details>
<summary>Execute the graph.</summary>

<ol>
<li>LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, <code>"agent"</code>.</li>
<li>The <code>"agent"</code> node executes, invoking the chat model.</li>
<li>The chat model returns an <code>AIMessage</code>. LangGraph adds this to the state.</li>
<li>The graph cycles through the following steps until there are no more <code>tool_calls</code> on the <code>AIMessage</code>:
<ul>
<li>If <code>AIMessage</code> has <code>tool_calls</code>, <code>"tools"</code> node executes</li>
<li>The <code>"agent"</code> node executes again and returns <code>AIMessage</code></li>
</ul>
</li>
<li>Execution progresses to the special <code>END</code> value and outputs the final state. As a result, we get a list of all our chat messages as output.</li>
</ol>
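
Besides <code>.invoke()</code>, the compiled graph can stream state as it executes (a sketch with an illustrative thread id):

```python
for state in app.stream(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"configurable": {"thread_id": "stream-example"}},
    stream_mode="values",  # emit the full state after each step
):
    state["messages"][-1].pretty_print()
```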
</details>

</details>

## Documentation

