A Grafana plugin designed to centralize access to LLMs, providing authentication, rate limiting, and more. Installing this plugin will enable various pieces of LLM-based functionality throughout Grafana.
Note: This plugin is experimental, and may change significantly between versions, or be deprecated completely in favor of a different approach based on user feedback.
To install this plugin, use the `GF_INSTALL_PLUGINS` environment variable when running Grafana:

```sh
GF_INSTALL_PLUGINS=grafana-llm-app
```

or alternatively install using the Grafana CLI:

```sh
grafana cli plugins install grafana-llm-app
```
The plugin can then be configured either in the UI or using provisioning, as shown below.
To provision this plugin, set the following environment variable when running Grafana:

```sh
OPENAI_API_KEY=sk-...
```

and use the following provisioning file (e.g. in `/etc/grafana/provisioning/plugins/grafana-llm-app`, when running in Docker):
```yaml
apiVersion: 1

apps:
  - type: 'grafana-llm-app'
    disabled: false
    jsonData:
      openAI:
        url: https://api.openai.com
    secureJsonData:
      openAIKey: $OPENAI_API_KEY
```
To make use of this plugin when adding LLM-based features, you can use the helper functions in the `@grafana/experimental` package.

First, add the correct version of `@grafana/experimental` to your dependencies in `package.json`:
```json
{
  "dependencies": {
    "@grafana/experimental": "1.7.0"
  }
}
```
Then in your components you can use the `llms` object from `@grafana/experimental` like so:
```tsx
import React, { useState } from 'react';
import { useAsync } from 'react-use';
import { scan } from 'rxjs/operators';

import { llms } from '@grafana/experimental';
import { PluginPage } from '@grafana/runtime';
import { Button, Input, Spinner } from '@grafana/ui';

const MyComponent = (): JSX.Element => {
  const [input, setInput] = useState('');
  const [message, setMessage] = useState('');
  const [reply, setReply] = useState('');

  const { loading, error } = useAsync(async () => {
    // Check if the LLM plugin is enabled and configured.
    const enabled = await llms.openai.enabled();
    if (!enabled) {
      return false;
    }
    if (message === '') {
      return;
    }
    // Stream the completions. Each element is the next stream chunk.
    const stream = llms.openai.streamChatCompletions({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You are a cynical assistant.' },
        { role: 'user', content: message },
      ],
    }).pipe(
      // Accumulate the stream chunks into a single string.
      scan((acc, delta) => acc + delta, '')
    );
    // Subscribe to the stream and update the state for each returned value.
    return stream.subscribe(setReply);
  }, [message]);

  if (error) {
    // TODO: handle errors.
    return null;
  }

  return (
    <div>
      <Input
        value={input}
        onChange={(e) => setInput(e.currentTarget.value)}
        placeholder="Enter a message"
      />
      <br />
      <Button type="submit" onClick={() => setMessage(input)}>Submit</Button>
      <br />
      <div>{loading ? <Spinner /> : reply}</div>
    </div>
  );
};
```
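If you don't need to stream the response token by token, the same module also exposes a one-shot request. The sketch below assumes that `llms.openai.chatCompletions` resolves with an OpenAI-style response object (`choices[0].message.content`); check this against the version of `@grafana/experimental` you have installed, as the exact shape may differ between releases.

```tsx
import { llms } from '@grafana/experimental';

// A minimal sketch of a non-streaming request, under the assumptions above.
async function askOnce(message: string): Promise<string> {
  // Skip the request entirely if the LLM plugin is not installed or configured.
  const enabled = await llms.openai.enabled();
  if (!enabled) {
    return '';
  }
  const response = await llms.openai.chatCompletions({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a cynical assistant.' },
      { role: 'user', content: message },
    ],
  });
  // Assumed OpenAI-style response shape; adjust if your installed version differs.
  return response.choices[0]?.message?.content ?? '';
}
```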
- Update the Grafana plugin SDK for Go dependency to the latest minor version:

  ```sh
  go get -u github.com/grafana/grafana-plugin-sdk-go
  go mod tidy
  ```

- Build backend plugin binaries for Linux, Windows and Darwin:

  ```sh
  mage -v
  ```

- List all available Mage targets for additional commands:

  ```sh
  mage -l
  ```
- Install dependencies:

  ```sh
  npm install
  ```
- Build plugin in development mode and run in watch mode:

  ```sh
  npm run dev
  ```

- Build plugin in production mode:

  ```sh
  npm run build
  ```

- Run the tests (using Jest):

  ```sh
  # Runs the tests and watches for changes, requires git init first
  npm run test

  # Exits after running all the tests
  npm run test:ci
  ```

- Spin up a Grafana instance and run the plugin inside it (using Docker):

  ```sh
  npm run server
  ```
- Run the E2E tests (using Cypress):

  ```sh
  # Spins up a Grafana instance first that we test against
  npm run server

  # Starts the tests
  npm run e2e
  ```

- Run the linter:

  ```sh
  npm run lint

  # or
  npm run lint:fix
  ```
The LLM example app can be a quick way to test out changes to the LLM plugin.
To use the example app in conjunction with the LLM plugin:
- Clone the llm example app.
- Update the following fields in `docker-compose.yaml` in the llm example app (see the sketch after this list):
  - Comment out the `GF_INSTALL_PLUGINS: grafana-llm-app` line.
  - Add the following volume: `<some-parent-path>/grafana-llm-app/dist:/var/lib/grafana/plugins/grafana-llm-app`
- Follow the instructions in the llm example app to run the app.
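Taken together, the edited Grafana service in the example app's `docker-compose.yaml` would look roughly like the sketch below; the service and image names here are assumptions that depend on the example app's own compose file, and `<some-parent-path>` still needs to be filled in with the path to your checkout:

```yaml
services:
  grafana:                  # service and image names depend on the example app's compose file
    image: grafana/grafana
    # environment:
    #   GF_INSTALL_PLUGINS: grafana-llm-app   # commented out so the local build is loaded instead
    volumes:
      - <some-parent-path>/grafana-llm-app/dist:/var/lib/grafana/plugins/grafana-llm-app
```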
- Bump the version in `package.json` (e.g., 0.2.0 to 0.2.1).
- Add notes to the changelog describing the changes since the last release.
- Merge the PR for a branch containing those changes into main.
- Go to Drone and identify the build corresponding to the merge into main.
- Promote the build to the `publish` target.