This is a brilliant idea. #6
Comments
If the API URL can be specified for local LLMs, or via litellm, that would be awesome 👍
I agree that using local LLMs would push this to the next level.
@lrvl thanks for mentioning LiteLLM - we make this pretty easy to add: https://github.com/BerriAI/litellm Let me know if I can help with anything.
I was racking my brain and went over my GH stars and remembered this: https://github.com/keldenl/gpt-llama.cpp

"gpt-llama.cpp is an API wrapper around llama.cpp. It runs a local API server that simulates OpenAI's API GPT endpoints but uses local llama-based models to process requests. It is designed to be a drop-in replacement for GPT-based applications, meaning that any apps created for use with GPT-3.5 or GPT-4 can work with llama.cpp instead."

That should bridge the gap between using OpenAI and your own models with MemGPT. Well, after the code is written.
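The "drop-in replacement" idea above boils down to pointing an OpenAI-style client at a different base URL. A minimal sketch of what that request would look like, assuming a local server at `http://localhost:8000/v1` and a model named `llama-7b` (both are illustrative placeholders, not gpt-llama.cpp's actual defaults):

```python
# Sketch: building an OpenAI-compatible chat request aimed at a local server.
# The base URL and model name are assumptions for illustration only.
import json
import urllib.request

LOCAL_API_BASE = "http://localhost:8000/v1"  # assumed local endpoint

def build_chat_request(messages, model="llama-7b"):
    """Build an OpenAI-style /chat/completions request against a local API base."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{LOCAL_API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Because the request shape is identical to OpenAI's, an app that lets you configure the base URL needs no other changes to talk to a local model.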
It's such a problem wrangling vectors: removing, adding, expiring, and whatnot. Would be cool if we could use local models. I'm sick of drinking from OpenAI's faucet.
Sorry for spamming your issues.
no issue.