An Obsidian plugin for generating and brainstorming ideas while writing your notes, using Large Language Models (LLMs) such as OpenAI's "gpt-3.5-turbo" and "gpt-4".
- Interact with self-hosted Large Language Models (LLMs): Provide a REST API URL to chat with models served by Ollama or LocalAI.
- Chat from anywhere in Obsidian: Chat with your bot from anywhere within Obsidian.
- Chat with current note: Use your chatbot to reference and engage within your current note.
- Chatbot responds in Markdown: Receive formatted responses in Markdown for consistency.
- Customizable bot name: Personalize the chatbot's name.
- System role prompt: Set a system prompt that shapes how the chatbot responds to your messages.
- System theme color accents: Seamlessly matches the chatbot's interface with your system's color scheme.
- Save current chat history as markdown: Use the `/save` command in chat to save the current conversation.
To use this plugin, you'll need an OpenAI account with API access. If you don't have an account yet, you can sign up for one on the OpenAI website.
Additionally, if you want to interact with self-hosted Large Language Models (LLMs) using Ollama or LocalAI, you will need the self-hosted API set up and running; follow the provider's instructions to get it started. Once you have the REST API URL for your self-hosted API, you can use it with this plugin to interact with your models.
Please see the instructions to set up Ollama with Obsidian.
Explore some models at GPT4ALL under the "Model Explorer" section or Ollama's Library.
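Once your self-hosted endpoint is running, you can sanity-check it outside Obsidian by posting to Ollama's REST API directly. The sketch below is illustrative, not part of the plugin: it assumes Ollama's default address (`http://localhost:11434`) and a locally pulled model named `llama2`; adjust both for your setup.

```python
import json
from urllib import request

# Ollama's default generate endpoint; change the host/port if yours differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a single prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask("llama2", "Suggest three note titles.")` returns the model's reply as a string; it requires a running Ollama server with that model pulled.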
There are three installation methods:
Obsidian Community plugins (Recommended):
- Search for "BMO Chatbot" in the Obsidian Community plugins.
- Enable "BMO Chatbot" in the settings.
To activate the plugin from this repo:
- Navigate to the plugin's folder in your terminal.
- Run `npm install` to install any necessary dependencies for the plugin.
- Once the dependencies have been installed, run `npm run build` to build the plugin.
- Once the plugin has been built, it should be ready to activate.
Install using Beta Reviewers Auto-update Tester (BRAT) - Quick guide for using BRAT
- Search for "Obsidian42 - BRAT" in the Obsidian Community plugins.
- Open the command palette and run the command `BRAT: Add a beta plugin for testing`. (If you want the plugin version to be frozen, use the command `BRAT: Add a beta plugin with frozen version based on a release tag`.)
- Paste "https://github.com/longy2k/obsidian-bmo-chatbot".
- Click on "Add Plugin".
- After BRAT confirms the installation, go to the Community plugins tab in Settings.
- Refresh the list of plugins.
- Find the beta plugin you just installed and enable it.
To start using the plugin, enable it in your settings menu and enter your OpenAI API key. After completing these steps, you can access the bot panel by clicking on the bot icon in the left sidebar.
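For context on what that key is used for: each chat message is sent to OpenAI's chat completions endpoint with your key in an `Authorization` header. The following is a minimal illustrative sketch of such a request, not the plugin's actual code; the default model and system prompt shown here are placeholders.

```python
import json
from urllib import request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(api_key: str, model: str,
                       system_prompt: str, user_message: str) -> request.Request:
    """Assemble the HTTP request for a single chat completion."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},  # bot persona / instructions
            {"role": "user", "content": user_message},     # what you typed in the chat
        ],
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # your OpenAI API key
    }
    return request.Request(API_URL, data=json.dumps(payload).encode("utf-8"),
                           headers=headers)

def chat(api_key: str, user_message: str,
         model: str = "gpt-3.5-turbo",
         system_prompt: str = "You are a helpful assistant.") -> str:
    """Send one message and return the assistant's reply."""
    req = build_chat_request(api_key, model, system_prompt, user_message)
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the key travels with every request, keep it private and never commit it to a shared vault.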
- `/help` - Show help commands.
- `/list` - List models.
- `/model` - Change model.
  - `/model 1` or `/model "gpt-3.5-turbo"`
  - `/model 2` or `/model "gpt-3.5-turbo-1106"`
  - ...
- `/system "[PROMPT]"` - Change the system prompt (e.g. `/system "WRITE IN ALL CAPS!"`).
- `/maxtokens [VALUE]` - Set max tokens.
- `/temp [VALUE]` - Set the temperature (range: 0 to 1).
- `/ref on | off` - Turn referencing the current note on or off.
- `/append` - Append current chat history to the current active note.
- `/save` - Save current chat history to a note.
- `/clear` or `/c` - Clear chat history.
- `/stop` or `/s` - Stop fetching a response.
- OpenAI
- gpt-3.5-turbo
- gpt-3.5-turbo-1106
- Newest gpt-3.5-turbo model with a context window of 16,385 tokens, replacing gpt-3.5-turbo-16k.
- Same pricing as gpt-3.5-turbo.
- gpt-4 (Context window: 8,192 tokens)
- gpt-4-1106-preview (Context window: 128,000 tokens)
- Any self-hosted models using Ollama.
- See the instructions to set up Ollama with Obsidian.
I would like to continue supporting Anthropic's models, but I no longer have access to the API.
I'm currently prioritizing Ollama over LocalAI due to its simplicity. I may drop LocalAI when Ollama becomes available on Windows.
- Anthropic
- claude-instant-1.2
- claude-2.0
- claude-2.1
- Any self-hosted models using LocalAI
"BMO" is a tag name for this project, inspired by the character BMO from the animated TV show "Adventure Time."
Any ideas or support is highly appreciated :)
If you find any bugs or have improvements or questions, please create an issue or discussion!