
LangChain Integration #605

Closed
MichaelMartinez opened this issue Mar 27, 2023 · 13 comments
Labels
enhancement New feature or request stale

Comments

@MichaelMartinez

Description

A next-level feature for this project would be integrating LangChain into the interface. This could be accomplished in several ways: a separate tab, a script box, or directly in the notebook. The costs of running LangChain against OpenAI get big really fast; integrating the library with local models would dramatically cut those costs.

Additional Context - from LangChain

Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:

❓ Question Answering over specific documents

Documentation
End-to-end Example: Question Answering over Notion Database
💬 Chatbots

Documentation
End-to-end Example: Chat-LangChain
🤖 Agents

Documentation
End-to-end Example: GPT+WolframAlpha

@MichaelMartinez MichaelMartinez added the enhancement New feature or request label Mar 27, 2023
@oobabooga
Owner

I do not understand langchain, but I can see that many people use it. Can you explain what its use case is?

@dibrale
Contributor

dibrale commented Mar 29, 2023

I would love to see LangChain support. The application I'd be most interested in is chatbot memory. As it stands, I've written a memory extension that does something resembling their ConversationSummaryBufferMemory, but it's really janky and slow. It would be nice not to have to reinvent that particular wheel.
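For readers unfamiliar with the pattern being described, here is a minimal stdlib-only sketch of a summary-buffer memory in the spirit of LangChain's ConversationSummaryBufferMemory. The class name and the `summarize` callable are hypothetical stand-ins; in a real integration `summarize` would be an LLM call.

```python
# Sketch of a summary-buffer memory: keep recent turns verbatim,
# fold older turns into a running summary once the buffer overflows.
class SummaryBufferMemory:
    def __init__(self, summarize, max_turns=4):
        self.summarize = summarize   # callable: list[str] -> str (stand-in for an LLM)
        self.max_turns = max_turns
        self.summary = ""            # running summary of old turns
        self.buffer = []             # recent turns kept verbatim

    def add_turn(self, speaker, text):
        self.buffer.append(f"{speaker}: {text}")
        # When the buffer overflows, fold the oldest turn into the summary.
        while len(self.buffer) > self.max_turns:
            oldest = self.buffer.pop(0)
            self.summary = self.summarize([self.summary, oldest])

    def context(self):
        """Text to prepend to the next prompt."""
        parts = [f"Summary so far: {self.summary}"] if self.summary else []
        return "\n".join(parts + self.buffer)

# Toy summarizer that just concatenates; a real one would call the model.
mem = SummaryBufferMemory(lambda chunks: " / ".join(c for c in chunks if c), max_turns=2)
mem.add_turn("User", "Hi")
mem.add_turn("Bot", "Hello!")
mem.add_turn("User", "What's LangChain?")
print(mem.context())
```

The point of the design is that prompt size stays bounded no matter how long the conversation runs, at the cost of one extra summarization call per evicted turn.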

@jfryton
Contributor

jfryton commented Mar 30, 2023

I'll let ChatGPT (Browsing) make a few points:

LangChain is a software library that helps developers enhance the capabilities of AI language models. It allows developers to create applications that use language models in combination with other sources of information, making the AI models smarter and more effective.

Integrating LangChain into a text-generation web interface (web UI) can provide several benefits:

  • Improved Question-Answering: LangChain can use specific documents to provide more accurate answers to questions asked by users.
  • Intelligent Agents: LangChain allows the creation of AI agents that can interact with external sources of knowledge, like WolframAlpha, to provide better responses.
  • Chatbot Memory: LangChain can give chatbots the ability to remember past interactions, resulting in more relevant responses.

@kagevazquez

> I do not understand langchain, but I can see that many people use it. Can you explain what its use case is?

Sequential multiple-input/output extension / LLM abstraction:

Chain 1 (input) -> chain 1 (output) -> chain 2 (input) -> chain 2 (output) -> chain 3 (input) -> chain 3 (output)

It should allow you to format/ingest data or even connect to the internet.

Basically DIY GPT-4 plug-ins.
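The chaining idea above can be sketched as plain function composition over a shared dict of values; the chain names below are hypothetical stand-ins for what would be prompt-templated LLM calls in real LangChain.

```python
# Minimal sketch of "output of chain N feeds input of chain N+1".
# Each chain is a function from a dict of inputs to a dict of new outputs.
def sequential(chains):
    def run(inputs):
        data = dict(inputs)
        for chain in chains:
            data.update(chain(data))  # later chains see all earlier outputs
        return data
    return run

# Hypothetical stand-ins for LLM-backed chains:
outline  = lambda d: {"outline": f"outline of {d['topic']}"}
draft    = lambda d: {"draft": f"draft from {d['outline']}"}
critique = lambda d: {"critique": f"critique of {d['draft']}"}

pipeline = sequential([outline, draft, critique])
result = pipeline({"topic": "local LLMs"})
print(result["critique"])  # → critique of draft from outline of local LLMs
```

Because each step only sees a dict, swapping OpenAI calls for a local model is a matter of changing what the individual chain functions do, which is exactly the cost-saving angle of this feature request.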

@tensiondriven
Contributor

> DIY gpt4 plug-ins

Beyond DIY, I see it more as a generalized API for models to talk to each other, talk to the outside world, and be talked to.

Regarding the confusing bit with sequences, I think the idea is for a model to be able to prompt itself, so as to get around issues with context length. In that case, having a model talk to itself is no different from one model talking to another, from the model's perspective, since each request is stateless.

I think this will be huge (and powerful, and dangerous) for LLMs in the medium term. In the short term, I think it's still so new that if we had it, I'm not sure how many people would use it. I also wonder if the spec is stable enough that it makes sense to work against it. It certainly could be.

I think one could write langchain as an extension with

  1. the ability to initiate updates to existing sessions (we can listen to them now)
  2. the ability to distinguish which session a request is coming from (i think the backend treats all requests as from a single session, though gradio maintains separate browser/ui sessions for each)
  3. there's probably a need to get a hook for "new session" and "session closed" similar to what the websockets see, probably exposed through gradio.

I commented on this ticket, which seems to be asking for "langchain lite" — something we already have 90% of, as far as I can tell.

@tensiondriven
Contributor

This fellow is making some great tutorial videos on LangChain, which could be a good introduction. It looks hecka powerful. The third or fourth video in the series uses ReAct prompts to ask the model to query a CSV using langchain and draw a conclusion. It's pretty cool!

@knoopx

knoopx commented Apr 6, 2023

@oobabooga LangChain is essentially a prompt generation and execution framework. It allows you to do things like re-writing and re-evaluating the conversation history to perform external data ingestion, or auto-summarizing the history to allow larger context. The problem is that it supersedes many parts of this project's codebase.

@fblissjr

@MichaelMartinez I was about to start working on my own lightweight integration for langchain and other tooling in the ecosystem - happy to work together on something if there's interest from you and anyone else in the community

@Wingie

Wingie commented Apr 11, 2023

We have some basic langchain integration here
https://github.com/seijihariki/text-generation-webui/blob/langchain/modules/lcagent/vicunaagent.py
The problem I'm facing is that the 13B/7B models find it really hard to follow the prompt for LangChain; I keep getting a "cannot parse LLM response" error when I try.
You don't need to start from scratch with LangChain integration. I'm hoping to test with the 30B model to see if it's better suited to the ReAct flow, or worst case train a LoRA for it to follow the prompts better.
See this discussion -
LAION-AI/Open-Assistant#2193

@Moordazzer

In case it helps: https://github.com/ausboss/Local-LLM-Langchain

@barshag

barshag commented May 14, 2023

Another useful feature request is being able to call text-webui from LangChain.

@Fusseldieb

Fusseldieb commented Aug 11, 2023

+1 Also interested in LangChain!

Just played around with it and it is really cool.

It basically gives you a framework for dealing with LLMs: memory, output parsers, and such.
Instead of templating your prompts with Mustache or any other library, you'd just use LangChain. Also, instead of hand-parsing the output as JSON and such, just use LangChain. It has everything you need. And you can chain actions together (the output of one step becomes the input of the next), making chain-of-thought and similar patterns really easy.
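To make the "output parser" idea concrete, here is a stdlib-only sketch of what such a parser does: pull a JSON object out of raw model text and validate the expected keys. The class name and API are hypothetical and mirror only the spirit of LangChain's output parsers, not its actual interface.

```python
# Sketch of an output parser: extract and validate JSON from model output.
import json
import re

class JsonOutputParser:
    def __init__(self, required_keys):
        self.required_keys = list(required_keys)

    def format_instructions(self):
        # Text to append to the prompt so the model knows the expected shape.
        keys = ", ".join(self.required_keys)
        return f"Respond with a JSON object containing the keys: {keys}."

    def parse(self, text):
        # Models often wrap JSON in chatter; grab the outermost braces.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if not match:
            raise ValueError("cannot parse LLM response: no JSON found")
        obj = json.loads(match.group(0))
        missing = [k for k in self.required_keys if k not in obj]
        if missing:
            raise ValueError(f"cannot parse LLM response: missing keys {missing}")
        return obj

parser = JsonOutputParser(["answer", "confidence"])
raw = 'Sure! Here you go: {"answer": "42", "confidence": 0.9}'
print(parser.parse(raw))  # → {'answer': '42', 'confidence': 0.9}
```

Raising a descriptive error on malformed output is what enables retry loops, which is also relevant to the "cannot parse LLM response" failures mentioned earlier in this thread with smaller models.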

@github-actions github-actions bot added the stale label Nov 26, 2023

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
