diff --git a/README.md b/README.md
index 1b91239..50263c3 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@ ChatGPT mode works the following way:
 
 You can separate a chat history and assistant settings for a given project by appending the following snippet to its settings:
 ```json
-{ 
+{
     "settings": {
         "ai_assistant": {
             "cache_prefix": "your_project_name"
@@ -85,7 +85,7 @@ You can add a few things to your request:
 
 To perform the former just select something within an active view and initiate the request this way without switching to another tab, selection would be added to a request as a preceding message (each selection chunk would be split by a new line).
 
-To send the whole file(s) in advance to request you should `super+button1` on them to make all tabs of them to become visible in a **single view group** and then run `[New Message|Chat Model] with Sheets` command as shown on the screen below. Pay attention, that in given example only `README.md` and `4.0.0.md` will be sent to a server, but not a content of the `AI chat`. 
+To send whole file(s) along with a request, `super+button1` on them so that all of their tabs become visible in a **single view group**, then run the `[New Message|Chat Model] with Sheets` command as shown in the screenshot below. Note that in the given example only `README.md` and `4.0.0.md` will be sent to the server, not the content of the `AI chat` tab.
 
 ![](static/media/file_selection_example.png)
 
@@ -105,7 +105,7 @@ It expects an absolute path to image to be selected in a buffer or stored in cli
 
 #### Phantom use case
 
-Phantom is the overlay UI placed inline in the editor view (see the picture below). It doesn't affects content of the view. 
+Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
 
 1. You can set `"prompt_mode": "phantom"` for AI assistant in its settings.
 2. [optional] Select some text to pass in context in to manipulate with.
@@ -133,10 +133,13 @@ Phantom is the overlay UI placed inline in the editor view (see the picture belo
 
 ### Open Source models support (llama.cpp, ollama)
 
-1. Replace `"url"` setting of a given model to point to whatever host you're server running on (e.g.`"http://localhost:8080"`). 
-2. [Optional] Provide a `"token"` if your provider required one.
+1. Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `"http://localhost:8080"`).
+2. ~~[Optional] Provide a `"token"` if your provider requires one.~~ **Temporarily mandatory; see the warning below.**
 3. Tweak `"chat_model"` to a model of your choice and you're set.
 
+> [!WARNING]
+> Due to a known issue, a token value of 10 or more characters is currently required even for unsecured servers. [More details here.](#workaround-for-64)
+
 > [!NOTE]
 > You can set both `url` and `token` either global or on per assistant instance basis, thus being capable to freely switching between closed source and open sourced models within a single session.
 
@@ -150,6 +153,9 @@ The OpenAI Completion plugin has a settings file where you can set your OpenAI A
 }
 ```
 
+> [!WARNING]
+> Due to a known issue, a token value of 10 or more characters is currently required even for unsecured servers. [More details here.](#workaround-for-64)
+
 ### Advertisement disabling
 
 To disable advertisement you have to add `"advertisement": false` line into an assistant setting where you wish it to be disabled.
@@ -158,7 +164,7 @@ To disable advertisement you have to add `"advertisement": false` line into an a
 
 You can bind keys for a given plugin command in `Preferences` -> `Package Settings` -> `OpenAI` -> `Key Bindings`. For example you can bind "New Message" including active tabs as context command like this:
 
-```json
+```javascript
 {
     "keys": [ "super+k", "super+'" ],
     "command": "openai", // or "openai_panel"
@@ -177,7 +183,7 @@ It just works.
 
 You can setup it up by overriding the proxy property in the `OpenAI completion` settings like follow:
 
-```json
+```javascript
 "proxy": {
     "address": "127.0.0.1", // required
     "port": 9898, // required
@@ -186,6 +192,17 @@ You can setup it up by overriding the proxy property in the `OpenAI completion` 
 }
 ```
 
+## Known bugs
+
+### Workaround for [#64](https://github.com/yaroslavyaroslav/OpenAI-sublime-text/issues/64)
+
+> [!IMPORTANT]
+> If any of your assistants are configured with a `"token"` of fewer than 10 characters, they will always fail to establish a connection, even if the server does not require authentication at all. Instead, the plugin will produce the following error message:
+>
+> **"No API token provided, you have to set the OpenAI token into the settings to make things work."**
+
+[Luckily, you'll find a quick and easy workaround here until a fix can be released.](https://github.com/yaroslavyaroslav/OpenAI-sublime-text/issues/64#issuecomment-2410170986)
+
 ## Disclaimers
 
 > [!WARNING]
@@ -197,4 +214,4 @@ You can setup it up by overriding the proxy property in the `OpenAI completion` 
 [stars]: https://github.com/yaroslavyaroslav/OpenAI-sublime-text/stargazers
 [img-stars]: static/media/star-on-github.svg
 [downloads]: https://packagecontrol.io/packages/OpenAI%20completion
-[img-downloads]: https://img.shields.io/packagecontrol/dt/OpenAI%2520completion.svg
\ No newline at end of file
+[img-downloads]: https://img.shields.io/packagecontrol/dt/OpenAI%2520completion.svg
diff --git a/openAI.sublime-settings b/openAI.sublime-settings
index 6d85e36..ca50eee 100644
--- a/openAI.sublime-settings
+++ b/openAI.sublime-settings
@@ -6,6 +6,7 @@
         "url": "https://api.openai.com",
 
         // Your openAI token
+        // Token can be anything so long as it is at least 10 characters long.
         "token": "",
 
         // Apply Sublime Text markdown syntax highlight to OpenAI completion output panel text.