Error "Something went wrong" when sending code completion request to Ollama instance #684
Comments
The problem currently prevents any use of this plugin, and downgrades no longer work due to the new IntelliJ version.
It looks like the host can no longer be overridden since the last version. So, if your Ollama server is running on a port other than 11434, the connection will fail. I will fix this in the next release. As a workaround, please run the server on the default port (11434).
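For anyone wanting to rule out the server itself, here is a minimal sketch (assuming a default local install reachable at http://localhost:11434) that checks whether Ollama responds on the default port before pointing the plugin at it:

```python
# Quick connectivity check against a local Ollama instance.
# Assumes the server is reachable at http://localhost:11434 (the default port).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # adjust if your instance lives elsewhere

with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]
    print("Available models:", [m["name"] for m in models])
```

If this prints the model list but the plugin still errors, the problem is most likely the host/port override described above rather than the server.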
This cannot be the reason. In my case the port is already the default one. Currently I have http://llm.local.net:11434
The same issue applies to your use case as well. 6b7e26
OK, my instance is on port 443, so that could be it. I use Nginx as a proxy between the Ollama instance and PhpStorm. I'll wait for the new release and check whether it works :)
I've just tried installing CodeGPT and came across this issue. My Ollama instance is hosted remotely and I was getting the "something went wrong" message. I've used an SSH tunnel (from localhost:11434 to remote:port) to work around it for now.
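For reference, a rough sketch of that tunnel workaround; the remote host name and port are placeholders, and invoking `ssh` directly in a terminal works just as well:

```python
# Sketch of the SSH-tunnel workaround described above: forward the local
# default Ollama port (11434) to a remote instance. Host and port are placeholders.
import subprocess

REMOTE = "user@ollama.example.com"   # hypothetical remote host
REMOTE_PORT = 11434                  # port Ollama listens on remotely

# -N: run no remote command, -L: forward localhost:11434 to the remote port
subprocess.run(
    ["ssh", "-N", "-L", f"11434:localhost:{REMOTE_PORT}", REMOTE],
    check=True,
)
```

With the tunnel running, the plugin can be left pointing at the default http://localhost:11434.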
This issue should be fixed in the latest version (2.11.1). Please reopen the ticket if it is still reproducible.
I can confirm, everything works after updating to 2.11.1 :D
I'm not sure if this is related to the fix or not, but has anyone else noticed that Ollama ignores the timeout setting in Advanced Settings?
What happened?
After updating to the new version of the plugin in PhpStorm, an error is thrown whenever I write something in the editor with code completion enabled. The same happens in the Chat window.
The only exception is fetching the model list from the instance: that request is sent and a valid list of available models is returned. A sketch illustrating the two kinds of requests follows below.
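To illustrate the difference (not taken from the report), a small sketch that exercises both endpoints against the same base URL; the base URL and model name are assumptions:

```python
# Diagnostic sketch: listing models (reportedly works) vs. requesting a
# completion (the kind of call that fails with "Something went wrong").
import json
import urllib.request

BASE = "http://localhost:11434"  # replace with the host configured in the plugin

# 1) Model list
with urllib.request.urlopen(f"{BASE}/api/tags", timeout=10) as resp:
    print("models:", [m["name"] for m in json.load(resp)["models"]])

# 2) Completion request ("codellama" is a placeholder for any locally pulled model)
payload = json.dumps(
    {"model": "codellama", "prompt": "def add(a, b):", "stream": False}
).encode()
req = urllib.request.Request(
    f"{BASE}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print("completion:", json.load(resp)["response"])
```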
Relevant log output or stack trace
Steps to reproduce
CodeGPT version
2.11.0-241.1
Operating System
Linux