Implement Ollama as a high-level service #510
Conversation
Please be aware I have no idea how the IDE plugin API works; there are bound to be issues 😆
Maybe I missed some discussion here, but I thought @carlrobertoh didn't want to support Ollama as a high-level service?
We did a little negotiating 😛
Since this is a popular request and Ollama doesn't support an API for OpenAI-compatible text completions, I've decided to make an exception. However, I'd still like to keep the others as they are and provide better documentation on how to configure them. 🙂
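For context, here's a minimal sketch of what hitting Ollama's native generate endpoint looks like, which is why a dedicated high-level service is needed instead of reusing the OpenAI-compatible path. The model name and the default local port 11434 are illustrative assumptions, and this is not the plugin's actual client code:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch: a non-streaming request against Ollama's native /api/generate
// endpoint. Assumes a local Ollama instance on the default port 11434.
fun generate(model: String, prompt: String): String {
    val body = """{"model": "$model", "prompt": ${toJsonString(prompt)}, "stream": false}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/generate"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body() // JSON with a "response" field holding the generated text
}

// Naive JSON escaping, good enough for the sketch; real code would use a JSON library.
private fun toJsonString(s: String): String =
    "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\""
```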
Good timing, I was actually working on adding this myself. But instead I now opened a PR for supporting llama.cpp's `/infill` endpoint.
Wow, that made progress way faster than I expected, thanks @PhilKes! I'll hold off on this for a bit and see if we can get that API in for the first release, which would effectively solve code completions.
Nice! In the meantime, we could switch the llama.cpp completions to the `/infill` endpoint.
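For reference, llama.cpp's server-side infill route takes the raw prefix and suffix and builds the model-specific FIM prompt itself, which is what makes it attractive here. The field names follow llama.cpp's server documentation and the default port 8080 is assumed; treat this as a sketch, not code from this PR:

```kotlin
// Sketch of a llama.cpp server /infill request body; the server assembles
// the fill-in-the-middle prompt from the prefix and suffix itself.
fun main() {
    val infillPayload = """
        {"input_prefix": "fun add(a: Int, b: Int) {",
         "input_suffix": "}",
         "n_predict": 64}
    """.trimIndent()
    // POST this to http://localhost:8080/infill (llama.cpp server's default port)
    // with Content-Type: application/json.
    println(infillPayload)
}
```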
Ollama as a high-level service supports /v1/completions. Keep it up!
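Assuming the installed Ollama version does expose the OpenAI-compatible /v1/completions route, the request would just be the standard OpenAI text-completion payload pointed at Ollama. A hedged sketch, not the plugin's actual request builder:

```kotlin
// Standard OpenAI-style text-completion payload aimed at Ollama's
// compatibility layer; assumes /v1/completions is available locally.
fun main() {
    val payload = """
        {"model": "codellama:7b-code",
         "prompt": "fun fibonacci(n: Int): Int {",
         "max_tokens": 64,
         "stop": ["\n\n"]}
    """.trimIndent()
    // POST to http://localhost:11434/v1/completions with Content-Type: application/json.
    println(payload)
}
```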
# Conflicts:
#   src/main/kotlin/ee/carlrobert/codegpt/actions/CodeCompletionFeatureToggleActions.kt
#   src/main/kotlin/ee/carlrobert/codegpt/codecompletions/CodeCompletionRequestFactory.kt
There were some bugs with immutable settings
Hey @boswelja, just wanted to confirm that I got your PR at its last state dc32216 working with the master branch of https://github.com/carlrobertoh/llm-client pushed to a local Maven repository. A few thoughts:
Hope this motivates you to push this PR further! Happy to test new changes and work out Ollama support.
Thanks @artem-zinnatullin! I'm aware of potential issues with toggling code completion; I was beaten to the punch by the custom OpenAI service completion, which implements this slightly differently, so I'm halfway through refactoring to match that. While we wait for ollama/ollama#3907, I'll try to split this into smaller PRs so that there's less to review all at once :)
Yes, that's me
@boswelja I think we shouldn't rely on the `/infill` endpoint. It would be really nice not having to bother with Infill Prompt Templates in the CodeGPT plugin itself, but I think the `/infill` endpoint would need to work reliably across models for that. If not, I would actually propose to roll back #513 and also not rely on it for the Ollama service implementation.
Fair enough, we can move forward sticking with the plugin's own Infill Prompt Templates.
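To make the trade-off concrete: an Infill Prompt Template wraps the prefix and suffix in model-specific FIM tokens on the client side, instead of letting the server do it. The token layouts below follow the published CodeLlama and StarCoder conventions, but the enum itself is a hypothetical sketch, not the plugin's actual class:

```kotlin
// Hypothetical sketch of client-side infill prompt templates; the token
// layouts follow the published CodeLlama and StarCoder FIM formats.
enum class InfillPromptTemplate(private val format: (String, String) -> String) {
    CODE_LLAMA({ prefix, suffix -> "<PRE> $prefix <SUF>$suffix <MID>" }),
    STAR_CODER({ prefix, suffix -> "<fim_prefix>$prefix<fim_suffix>$suffix<fim_middle>" });

    fun buildPrompt(prefix: String, suffix: String) = format(prefix, suffix)
}

fun main() {
    // The model is expected to generate the code that belongs between prefix and suffix.
    println(InfillPromptTemplate.CODE_LLAMA.buildPrompt("fun add(a: Int, b: Int) {", "}"))
}
```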
Huh, that's probably the reason why I rolled back the /infill API in the first place, although I never actually investigated why some of the models weren't working as expected. @PhilKes Let's revert the last change :) @boswelja is the PR ready for review? I might push some changes on the fly, or perhaps merge it as is, since I'm planning on integrating another new service, which might cause some merge conflicts.
I was about to say "no, I've got a couple of smaller PRs that should go in first", but it looks like they're merged now! I'll resolve conflicts and do another once-over. I think the only other thing I wanted input on is #510 (comment)
# Conflicts:
#   gradle/libs.versions.toml
#   src/main/java/ee/carlrobert/codegpt/completions/CompletionRequestService.java
#   src/main/kotlin/ee/carlrobert/codegpt/actions/CodeCompletionFeatureToggleActions.kt
#   src/main/kotlin/ee/carlrobert/codegpt/codecompletions/CodeGPTInlineCompletionProvider.kt
Current issues:
Not really sure how to fix that first one
I made a few minor changes, including fixing the dropdown refresh issues. Edit: Will revert the removal.
I don't think it's an issue at the moment. Let's keep it.
Force-pushed from b4702bc to 0f59c8b
Everything seems to be working more or less; code completions still need to be improved, but other than that, it seems good. I'll try to provide better documentation on how to set everything up soon as well. Also, if something pops up, I'll fix it on the fly. Furthermore, you can expect the feature to be released sometime early next week, hopefully even earlier. A big thank you to everyone for your help and support! ❤️
* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kt
* Create OllamaInlineCompletionModel and use it for building completion template
* Add support for blocking code completion on models that we don't know support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty/needs refresh/loading
* Avoid storing any model if the combo box is empty
* Fix icon size
* Back to mutable settings (there were some bugs with immutable settings)
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi model selection

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
Implementing Ollama as a high-level service type. This has a few advantages:
Right now, the above benefits have translated into:
Currently blocked by:
Add /api/infill for fill-in-the-middle ollama/ollama#3907. Delays with the implementation on the Ollama side; let's stick with what we've got for now.
Screenshots: