
[Enhancement] Support local and remote ollama instances #13

Open
gaussandhisgun opened this issue Sep 23, 2024 · 0 comments

Ollama is an app that lets you host your own LLMs, assuming you have the hardware for that. And the hardware requirements are actually pretty generous: I can run gemma2-2b on my 2012 ThinkPad (3rd-gen Intel Core i7, 12 GB of RAM, no dGPU). It's also entirely free and open source.

Wouldn't it be cool if you could point your keyboard to it and make your own, personal, self-hosted AI work for you? I know it would, because all the currently available options are sadly unavailable in my area due to, ahem, capitalist bullshit. You should always have a fallback that does not rely on external servers owned by other people, you know what I'm sayin?

The Ollama API reference is available here. A community-driven wrapper library for Java is available here. And another one. And a third one. A minimal request against the native API looks like the sketch below.
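
As a rough illustration of what an integration would have to do, here is a minimal sketch of a non-streaming call to Ollama's native `/api/generate` endpoint using plain `java.net.http`. It assumes a default local instance at `http://localhost:11434` and that the `gemma2:2b` model has already been pulled; a remote instance would just use a different host.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaGenerateSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a default local Ollama install; swap the host for a remote instance.
        String endpoint = "http://localhost:11434/api/generate";

        // Minimal non-streaming request against the native Ollama API.
        // "gemma2:2b" is just the model used as an example here.
        String body = """
                {"model": "gemma2:2b",
                 "prompt": "Suggest the next word after: the quick brown",
                 "stream": false}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The generated text comes back in the "response" field of the returned JSON.
        System.out.println(response.body());
    }
}
```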

P.S. I did try the promised built-in Ollama compatibility with the OpenAI completions API; it did not work, probably because that uses the v1 API and you don't.
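
For comparison, Ollama's OpenAI-compatible layer lives under the `/v1` path and expects an OpenAI-style `messages` payload rather than the native `prompt` field. The sketch below (same assumptions as above: default local instance, `gemma2:2b` as an example model) shows that request shape; an integration that hard-codes a different path or payload would fail the way described here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaOpenAiCompatSketch {
    public static void main(String[] args) throws Exception {
        // The OpenAI-compatible endpoint is served under /v1 on the same port.
        String endpoint = "http://localhost:11434/v1/chat/completions";

        // OpenAI-style chat payload, not the native Ollama "prompt" shape.
        String body = """
                {"model": "gemma2:2b",
                 "messages": [{"role": "user", "content": "Say hello in five words."}]}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 404 or error body here would point to a path/payload mismatch of the
        // kind mentioned above.
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}
```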
