Is your feature request related to a problem? Please describe.
I am currently experimenting with self-hosting a quantized version of the OpenAssistant LLM behind the API provided by https://github.com/oobabooga/text-generation-webui. To achieve fully self-hosted chat functionality, I have written a simple Python script that sends an API request whenever a Nextcloud Talk command is invoked in the chat, passing the rest of the message as a prompt to the LLM. This could of course be extended to other APIs, such as those offered by HuggingFace and OpenAI.
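For reference, a minimal sketch of such a bridge script might look like the following. The endpoint path, payload shape, and response format are assumptions based on a typical text-generation-webui API setup and may differ between versions:

```python
import json
import urllib.request

# Assumed endpoint of a locally running text-generation-webui instance;
# the exact route and port depend on the webui version and configuration.
API_URL = "http://127.0.0.1:5000/api/v1/generate"

def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    """Wrap the text of the Talk command in the assumed request body."""
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

def extract_reply(response_body: str) -> str:
    """Pull the generated text out of the assumed JSON response shape."""
    data = json.loads(response_body)
    return data["results"][0]["text"]

def ask_llm(message: str) -> str:
    """Send the rest of the command's message as a prompt to the LLM."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(resp.read().decode())
```

The Talk command would invoke this script with the message text and post `ask_llm(...)`'s return value back into the conversation.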
Describe the solution you'd like
I'd like to be able to avoid typing the command every time I want to send a message to the LLM. My quick proposal:
/<command> enable: enters a mode in which everything typed in the chat from then on is processed by the command <command>
/<command> disable: exits this "chat mode" and stops sending the contents of messages to the command script.
Describe alternatives you've considered
Currently I have to type the command in every message. It works, but it's not optimal and would not work well for inexperienced users.
Additional context
I'd also love it if Talk supported Markdown; it would be very useful when asking LLMs to produce code or formatted text (not opening another issue, as there is #1027 already) 🤖