fix elastic rag template in playground (langchain-ai#12682)
- a few instructions in the readme (load_documents -> ingest.py)
- added docker run command for local elastic
- adds input type definition to render playground properly
efriis authored and xieqihui committed Nov 21, 2023
1 parent b48c789 commit 7e78462
Showing 3 changed files with 22 additions and 6 deletions.
3 changes: 2 additions & 1 deletion templates/chat-bot-feedback/chat_bot_feedback/chain.py
```diff
@@ -165,7 +165,6 @@ def format_chat_history(chain_input: dict) -> dict:
 # with the new `tool.langserve.export_attr`
 chain = (
     (format_chat_history | _prompt | _model | StrOutputParser())
-    .with_types(input_type=ChainInput)
     # This is to add the evaluators as "listeners"
     # and to customize the name of the chain.
     # Any chain that accepts a compatible input type works here.
@@ -180,3 +179,5 @@ def format_chat_history(chain_input: dict) -> dict:
         ],
     )
 )
+
+chain = chain.with_types(input_type=ChainInput)
```
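The change moves `.with_types` out of the chain expression and applies it once to the fully assembled chain, so the type declaration sits on the outermost object. A stdlib-only sketch of that pattern (a hypothetical `Pipeline` class for illustration, not the LangChain API):

```python
from dataclasses import dataclass, replace
from typing import Any, Callable, Optional, Type


@dataclass(frozen=True)
class Pipeline:
    """Toy stand-in for a runnable chain (hypothetical, for illustration)."""

    func: Callable[[Any], Any]
    input_type: Optional[Type] = None

    def with_types(self, input_type: Type) -> "Pipeline":
        # Return a copy that carries the declared input schema.
        return replace(self, input_type=input_type)

    def invoke(self, value: Any) -> Any:
        return self.func(value)


class ChainInput:  # placeholder schema type
    pass


def shout(d: dict) -> str:
    return d["question"].upper()


chain = Pipeline(shout)
# Annotate the final object, mirroring the pattern in the diff above.
chain = chain.with_types(input_type=ChainInput)

print(chain.input_type.__name__)  # a UI could read this to render an input form
print(chain.invoke({"question": "hello"}))  # → HELLO
```

Whatever later inspects `chain` (here, the hypothetical `input_type` attribute) then sees the declared schema on the object actually exported.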
11 changes: 8 additions & 3 deletions templates/rag-elasticsearch/README.md
````diff
@@ -1,7 +1,7 @@
 
 # rag-elasticsearch
 
-This template performs RAG using ElasticSearch.
+This template performs RAG using [ElasticSearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
 
 It relies on sentence transformer `MiniLM-L6-v2` for embedding passages and questions.
 
@@ -19,7 +19,12 @@ export ELASTIC_PASSWORD = <ClOUD_PASSWORD>
 For local development with Docker, use:
 
 ```bash
-export ES_URL = "http://localhost:9200"
+export ES_URL="http://localhost:9200"
+```
+
+And run an Elasticsearch instance in Docker with
+```bash
+docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
 ```
 
 ## Usage
````
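After running the Docker command the README now adds, it can help to confirm the local node actually answers before starting the template. A small stdlib check (a hypothetical helper, not part of the template):

```python
import json
import urllib.request
from urllib.error import URLError


def es_is_up(url: str = "http://localhost:9200", timeout: float = 2.0) -> bool:
    """Return True if an Elasticsearch node answers with cluster info at `url`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            info = json.load(resp)
            # A GET on the root of an ES node returns cluster info,
            # including a "version" object.
            return "version" in info
    except (URLError, OSError, ValueError):
        return False


if __name__ == "__main__":
    print("Elasticsearch reachable:", es_is_up())
```

With security disabled as in the Docker command, a plain unauthenticated HTTP request is enough; against Elastic Cloud the same check would need the credentials from the environment variables above.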
````diff
@@ -83,7 +88,7 @@ runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
 For loading the fictional workplace documents, run the following command from the root of this repository:
 
 ```bash
-python ./data/load_documents.py
+python ingest.py
 ```
 
 However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
````
14 changes: 12 additions & 2 deletions templates/rag-elasticsearch/rag_elasticsearch/chain.py
```diff
@@ -1,12 +1,13 @@
 from operator import itemgetter
-from typing import List, Tuple
+from typing import List, Optional, Tuple
 
 from langchain.chat_models import ChatOpenAI
 from langchain.embeddings import HuggingFaceEmbeddings
-from langchain.schema import format_document
+from langchain.schema import BaseMessage, format_document
 from langchain.schema.output_parser import StrOutputParser
 from langchain.schema.runnable import RunnableMap, RunnablePassthrough
 from langchain.vectorstores.elasticsearch import ElasticsearchStore
+from pydantic import BaseModel, Field
 
 from .connection import es_connection_details
 from .prompts import CONDENSE_QUESTION_PROMPT, DOCUMENT_PROMPT, LLM_CONTEXT_PROMPT
@@ -41,6 +42,13 @@ def _format_chat_history(chat_history: List[Tuple]) -> str:
     return buffer
 
 
+class ChainInput(BaseModel):
+    chat_history: Optional[List[BaseMessage]] = Field(
+        description="Previous chat messages."
+    )
+    question: str = Field(..., description="The question to answer.")
+
+
 _inputs = RunnableMap(
     standalone_question=RunnablePassthrough.assign(
         chat_history=lambda x: _format_chat_history(x["chat_history"])
@@ -56,3 +64,5 @@ def _format_chat_history(chat_history: List[Tuple]) -> str:
 }
 
 chain = _inputs | _context | LLM_CONTEXT_PROMPT | llm | StrOutputParser()
+
+chain = chain.with_types(input_type=ChainInput)
```
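The added `ChainInput` model is what lets the playground render a proper input form: LangServe derives a JSON schema from the declared input type. A standalone sketch of that derivation (pydantic only, no LangChain; `BaseMessage` is swapped for a placeholder `Message` model, and `chat_history` is given an explicit `None` default, so the snippet is self-contained):

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class Message(BaseModel):
    """Placeholder for langchain's BaseMessage, to keep the example standalone."""

    content: str


class ChainInput(BaseModel):
    chat_history: Optional[List[Message]] = Field(
        None, description="Previous chat messages."
    )
    question: str = Field(..., description="The question to answer.")


# LangServe exposes a schema like this for the playground's input form.
schema = ChainInput.schema()  # `.model_json_schema()` on pydantic v2
print(sorted(schema["properties"]))  # → ['chat_history', 'question']
print(schema["required"])  # → ['question']
```

Only `question` is required, so the playground can show `chat_history` as an optional message list instead of a single opaque text box.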
