Caching LLM chats and embeddings with Elasticsearch #19334
giacbrd started this conversation in Show and tell
-
Hi,
@GabrieleGhisleni and I (both from SpazioDati) are developing an LLM cache integration and a store for caching embeddings on Elasticsearch.
The llm-elasticsearch-cache project has reached a stable state, and we would like to contribute it to the LangChain codebase. As stated in the readme, the two classes are tools for exploiting Elasticsearch, for example as searchable storage for cached data.
Before opening a pull request, we would like to understand whether the contribution is reasonable and where it should be placed. We have noticed that everything related to Elasticsearch is moving to this package, but the cache module contains all the available cache integrations. Is it fine to keep the Elasticsearch LLM cache in the cache module instead of the partner package?
The contributing documentation recommends contacting a maintainer directly with any doubts, so we would like to ask @efriis for guidance on this (given you are very active on PRs). Thanks!
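For context, here is a minimal sketch of how two such classes would plug into LangChain's caching hooks (`set_llm_cache` for chat responses, `CacheBackedEmbeddings` for vectors). The import path, class names, and constructor parameters below are assumptions for illustration, not the project's documented API; check the llm-elasticsearch-cache readme for the actual interface:

```python
# NOTE: a minimal sketch only. The import path, class names, and constructor
# parameters of the two cache classes are hypothetical stand-ins; the
# LangChain calls (set_llm_cache, CacheBackedEmbeddings) are real.
from elasticsearch import Elasticsearch
from langchain.embeddings import CacheBackedEmbeddings
from langchain.globals import set_llm_cache
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Hypothetical imports standing in for the project's two classes.
from llm_elasticsearch_cache import ElasticsearchCache, ElasticsearchEmbeddingsStore

es_client = Elasticsearch(hosts=["http://localhost:9200"])

# 1) LLM cache: LangChain consults it for every (prompt, llm_string) pair,
#    so repeated identical calls are answered from an Elasticsearch index
#    instead of hitting the model API again.
set_llm_cache(ElasticsearchCache(es_client=es_client, index_name="llm-chat-cache"))

llm = ChatOpenAI(model="gpt-3.5-turbo")
llm.invoke("What is Elasticsearch?")  # first call goes to the model
llm.invoke("What is Elasticsearch?")  # second call is served from the cache

# 2) Embeddings store: acting as a LangChain byte store, it lets
#    CacheBackedEmbeddings persist computed vectors keyed by text hash.
store = ElasticsearchEmbeddingsStore(es_client=es_client, index_name="llm-embeddings-cache")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    OpenAIEmbeddings(), store, namespace="text-embedding-ada-002"
)
cached_embedder.embed_documents(["embedded once", "then reused from the cache"])
```

Because both caches would live in ordinary Elasticsearch indices, cached prompts and vectors stay queryable with standard searches, which is the "searchable storage" point made above.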
-
cc @joemcelroy
-
Thanks for your contribution! Happy for you to add it to the Elasticsearch partner package if you want; it might be more discoverable there and become part of our documentation. We can help review too.