Add initial system prompt in ChatHandler and completion #28

Merged · 4 commits · Feb 4, 2025
src/chat-handler.ts (20 additions, 3 deletions)

```diff
@@ -13,10 +13,12 @@ import type { BaseChatModel } from '@langchain/core/language_models/chat_models'
 import {
   AIMessage,
   HumanMessage,
-  mergeMessageRuns
+  mergeMessageRuns,
+  SystemMessage
 } from '@langchain/core/messages';
 import { UUID } from '@lumino/coreutils';
 import { getErrorMessage } from './llm-models';
+import { chatSystemPrompt } from './provider';
 import { IAIProvider } from './token';
 
 export type ConnectionMessage = {
@@ -28,15 +30,28 @@ export class ChatHandler extends ChatModel {
   constructor(options: ChatHandler.IOptions) {
     super(options);
     this._aiProvider = options.aiProvider;
+    this._prompt = chatSystemPrompt(this._aiProvider.name);
 
     this._aiProvider.modelChange.connect(() => {
       this._errorMessage = this._aiProvider.chatError;
+      this._prompt = chatSystemPrompt(this._aiProvider.name);
     });
   }
 
   get provider(): BaseChatModel | null {
     return this._aiProvider.chatModel;
   }
 
+  /**
+   * Getter and setter for the initial prompt.
+   */
+  get prompt(): string {
+    return this._prompt;
+  }
+  set prompt(value: string) {
+    this._prompt = value;
+  }
+
   async sendMessage(message: INewMessage): Promise<boolean> {
     message.id = UUID.uuid4();
     const msg: IChatMessage = {
@@ -62,8 +77,9 @@
 
     this._history.messages.push(msg);
 
-    const messages = mergeMessageRuns(
-      this._history.messages.map(msg => {
+    const messages = mergeMessageRuns([new SystemMessage(this._prompt)]);
+    messages.push(
+      ...this._history.messages.map(msg => {
         if (msg.sender.username === 'User') {
           return new HumanMessage(msg.body);
         }
@@ -117,6 +133,7 @@
   }
 
   private _aiProvider: IAIProvider;
+  private _prompt: string;
   private _errorMessage: string = '';
   private _history: IChatHistory = { messages: [] };
   private _defaultErrorMessage = 'AI provider not configured';
```
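For context, a minimal standalone sketch of the new message flow (not part of the diff; the prompt text and history contents are hypothetical): the system prompt is seeded into the list first and the stored history is appended after it, so every request now reaches the model with the prompt in front.

```typescript
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
  mergeMessageRuns
} from '@langchain/core/messages';

// Hypothetical state standing in for the handler's fields.
const prompt = 'You are Jupyternaut, a conversational assistant...';
const history = [
  new HumanMessage('How do I read a CSV file?'),
  new AIMessage('Use pandas: pd.read_csv("file.csv").'),
  new HumanMessage('And how do I plot a column?')
];

// Mirrors the patched sendMessage(): the system message goes first, then
// the history is pushed after it. mergeMessageRuns() folds consecutive
// messages of the same type into one, which some chat APIs expect.
const messages = mergeMessageRuns([new SystemMessage(prompt)]);
messages.push(...history);
```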
src/llm-models/anthropic-completer.ts (13 additions, 3 deletions)

```diff
@@ -7,6 +7,7 @@ import { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import { AIMessage, SystemMessage } from '@langchain/core/messages';
 
 import { BaseCompleter, IBaseCompleter } from './base-completer';
+import { COMPLETION_SYSTEM_PROMPT } from '../provider';
 
 export class AnthropicCompleter implements IBaseCompleter {
   constructor(options: BaseCompleter.IOptions) {
@@ -17,6 +18,16 @@ export class AnthropicCompleter implements IBaseCompleter {
     return this._anthropicProvider;
   }
 
+  /**
+   * Getter and setter for the initial prompt.
+   */
+  get prompt(): string {
+    return this._prompt;
+  }
+  set prompt(value: string) {
+    this._prompt = value;
+  }
+
   async fetch(
     request: CompletionHandler.IRequest,
     context: IInlineCompletionContext
@@ -28,9 +39,7 @@
     const trimmedPrompt = prompt.trim();
 
     const messages = [
-      new SystemMessage(
-        'You are a code-completion AI completing the following code from a Jupyter Notebook cell.'
-      ),
+      new SystemMessage(this._prompt),
       new AIMessage(trimmedPrompt)
     ];
 
@@ -62,4 +71,5 @@
   }
 
   private _anthropicProvider: ChatAnthropic;
+  private _prompt: string = COMPLETION_SYSTEM_PROMPT;
 }
```
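The hardcoded completion instruction becomes a mutable property that defaults to the shared `COMPLETION_SYSTEM_PROMPT`. A hypothetical usage sketch, assuming `completer` is an already constructed `AnthropicCompleter`:

```typescript
// `completer` is assumed to be an AnthropicCompleter instance.
// It starts out with the shared default from src/provider.ts...
console.log(completer.prompt); // COMPLETION_SYSTEM_PROMPT

// ...and can now be swapped at runtime, e.g. to specialize per language,
// without touching the fetch logic:
completer.prompt =
  'You are completing Python code from a Jupyter Notebook cell. Return only code.';
```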
src/llm-models/base-completer.ts (5 additions)

```diff
@@ -11,6 +11,11 @@ export interface IBaseCompleter {
    */
   provider: BaseLanguageModel;
 
+  /**
+   * The completion prompt.
+   */
+  prompt: string;
+
   /**
    * The function to fetch a new completion.
    */
```
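With this change, every completer has to expose a readable and writable `prompt` alongside `provider` and `fetch`. A minimal illustrative stub (not part of the PR; the `fetch` body and return shape are assumptions based on how the interface is used elsewhere in the diff):

```typescript
import {
  CompletionHandler,
  IInlineCompletionContext
} from '@jupyterlab/completer';
import { FakeListChatModel } from '@langchain/core/utils/testing';
import { IBaseCompleter } from './base-completer';
import { COMPLETION_SYSTEM_PROMPT } from '../provider';

// Illustrative stub satisfying the updated IBaseCompleter contract.
class StubCompleter implements IBaseCompleter {
  // LangChain's testing fake stands in for a real chat model provider.
  provider = new FakeListChatModel({ responses: ['pass'] });

  get prompt(): string {
    return this._prompt;
  }
  set prompt(value: string) {
    this._prompt = value;
  }

  async fetch(
    request: CompletionHandler.IRequest,
    context: IInlineCompletionContext
  ) {
    // A real implementation would send this._prompt as a system message.
    return { items: [{ insertText: 'pass' }] };
  }

  private _prompt: string = COMPLETION_SYSTEM_PROMPT;
}
```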
src/llm-models/codestral-completer.ts (12 additions)

```diff
@@ -8,6 +8,7 @@ import { Throttler } from '@lumino/polling';
 import { CompletionRequest } from '@mistralai/mistralai';
 
 import { BaseCompleter, IBaseCompleter } from './base-completer';
+import { COMPLETION_SYSTEM_PROMPT } from '../provider';
 
 /**
  * The Mistral API has a rate limit of 1 request per second
@@ -66,6 +67,16 @@ export class CodestralCompleter implements IBaseCompleter {
     return this._mistralProvider;
   }
 
+  /**
+   * Getter and setter for the initial prompt.
+   */
+  get prompt(): string {
+    return this._prompt;
+  }
+  set prompt(value: string) {
+    this._prompt = value;
+  }
+
   set requestCompletion(value: () => void) {
     this._requestCompletion = value;
   }
@@ -109,5 +120,6 @@
   private _requestCompletion?: () => void;
   private _throttler: Throttler;
   private _mistralProvider: MistralAI;
+  private _prompt: string = COMPLETION_SYSTEM_PROMPT;
   private _currentData: CompletionRequest | null = null;
 }
```
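The Codestral completer gains the same prompt property; its existing `Throttler` (the file's comment notes the Mistral API allows one request per second) is untouched. For readers unfamiliar with the pattern, a standalone sketch of `@lumino/polling` throttling, with a hypothetical stand-in for the completion request:

```typescript
import { Throttler } from '@lumino/polling';

// Hypothetical stand-in for the Mistral completion request.
async function requestCompletion(code: string): Promise<string> {
  return `completed(${code})`;
}

// Invoke at most once per second, matching the rate limit the file
// documents for the Mistral API.
const throttler = new Throttler(requestCompletion, 1000);

// Rapid keystrokes collapse into throttled invocations; only the
// throttled cadence ever reaches the API.
void throttler.invoke('def add(a, b):');
void throttler.invoke('def add(a, b): return');
```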
src/provider.ts (24 additions)

```diff
@@ -8,6 +8,30 @@ import { CompletionProvider } from './completion-provider';
 import { getChatModel, IBaseCompleter } from './llm-models';
 import { IAIProvider } from './token';
 
+export const chatSystemPrompt = (provider_name: string) => `
+You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
+You are not a language model, but rather an application built on a foundation model from ${provider_name}.
+You are talkative and you provide lots of specific details from the foundation model's context.
+You may use Markdown to format your response.
+If your response includes code, they must be enclosed in Markdown fenced code blocks (with triple backticks before and after).
+If your response includes mathematical notation, they must be expressed in LaTeX markup and enclosed in LaTeX delimiters.
+All dollar quantities (of USD) must be formatted in LaTeX, with the \`$\` symbol escaped by a single backslash \`\\\`.
+- Example prompt: \`If I have \\\\$100 and spend \\\\$20, how much money do I have left?\`
+- **Correct** response: \`You have \\(\\$80\\) remaining.\`
+- **Incorrect** response: \`You have $80 remaining.\`
+If you do not know the answer to a question, answer truthfully by responding that you do not know.
+The following is a friendly conversation between you and a human.
+`;
+
+export const COMPLETION_SYSTEM_PROMPT = `
+You are an application built to provide helpful code completion suggestions.
+You should only produce code. Keep comments to minimum, use the
+programming language comment syntax. Produce clean code.
+The code is written in JupyterLab, a data analysis and code development
+environment which can execute code extended with additional syntax for
+interactive features, such as magics.
+`;
+
 export class AIProvider implements IAIProvider {
   constructor(options: AIProvider.IOptions) {
     this._completionProvider = new CompletionProvider({
```

Review comment from **@jtpio** (Member, Feb 4, 2025) on the `chatSystemPrompt` declaration:

> Should this function take a single option, instead of raw parameters like `provider_name`?
>
> Wondering if this could make it easier to extend later.
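A sketch of the direction the review comment suggests: accepting a single options object instead of positional parameters lets fields be added later without changing the signature at every call site. The interface and field names here are hypothetical, not part of the PR:

```typescript
// Hypothetical options bag; new fields can be added without breaking callers.
export interface IChatSystemPromptOptions {
  /** Name of the provider the chat model is built on. */
  providerName: string;
}

export const chatSystemPrompt = (options: IChatSystemPromptOptions) =>
  `You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
You are not a language model, but rather an application built on a foundation model from ${options.providerName}.
...`; // remainder of the prompt text unchanged from the PR

// The call site in ChatHandler would then read:
//   this._prompt = chatSystemPrompt({ providerName: this._aiProvider.name });
```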