Language model access API #206265
With #206358 there will be no more language model access object. Instead, all requests are made directly via `vscode.lm`.
* makes ExtHostAuth and ExtHostLM injectable
* (hack) makes silent auth requests for all extensions upon LM registration #206265
Hi @jrieken, are there any plans to add a …? Thanks!

Yeah, that exists. It is the …

@jrieken I assume you are referring to the … So if I use …

No, I mean the proposal that I have mentioned.
FYI that I made the following changes to the …

These changes allow for a better implementation of the providing-end (…):

```ts
// select a model by vendor and family
await vscode.lm.selectChatModels({ vendor: 'copilot', family: 'gpt-3.5-turbo' })

// select all models
await vscode.lm.selectChatModels()

// hardcode an identifier (previous usage model, NOT recommended because brittle)
await vscode.lm.selectChatModels({ id: 'someHardcodedModelId' })
```

To restate, the `LanguageModelChatSelector` is:

```ts
export interface LanguageModelChatSelector {
	vendor?: string;
	family?: string;
	version?: string;
	id?: string;
}
```

Note that …
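To illustrate the selector semantics described above, here is a minimal, self-contained sketch of how such a selector could filter available models. The `matchesSelector` helper, the `ChatModelMetadata` shape, and the sample model list are hypothetical illustrations, not part of the actual VS Code implementation; only the `LanguageModelChatSelector` shape comes from the proposal.

```typescript
// Selector shape, mirroring the proposed LanguageModelChatSelector.
interface LanguageModelChatSelector {
  vendor?: string;
  family?: string;
  version?: string;
  id?: string;
}

// Hypothetical metadata shape for a registered chat model (illustration only).
interface ChatModelMetadata {
  id: string;
  vendor: string;
  family: string;
  version: string;
}

// Every property set on the selector must match; unset properties match anything.
function matchesSelector(model: ChatModelMetadata, selector: LanguageModelChatSelector): boolean {
  return (selector.vendor === undefined || selector.vendor === model.vendor)
    && (selector.family === undefined || selector.family === model.family)
    && (selector.version === undefined || selector.version === model.version)
    && (selector.id === undefined || selector.id === model.id);
}

// Sample registry (made-up ids/versions, for illustration only).
const models: ChatModelMetadata[] = [
  { id: 'copilot-gpt-3.5-turbo', vendor: 'copilot', family: 'gpt-3.5-turbo', version: '2024-01' },
  { id: 'copilot-gpt-4', vendor: 'copilot', family: 'gpt-4', version: '2024-01' },
];

// An empty selector selects all models, analogous to vscode.lm.selectChatModels().
const all = models.filter(m => matchesSelector(m, {}));
const turbo = models.filter(m => matchesSelector(m, { vendor: 'copilot', family: 'gpt-3.5-turbo' }));
console.log(all.length, turbo.map(m => m.id));
```

This reading treats the selector as a conjunction of constraints, which matches the "hardcoded id is brittle" advice: filtering by `vendor`/`family` keeps working when ids change.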
I see mention of moving the System prompt to a separate proposal; can that be linked to? I am not finding it, and we use it in our extension. Also, is this section of the guide still accurate? https://code.visualstudio.com/api/extension-guides/language-model#prompt-crafting It's not showing those commands as imports anymore, and from my reading of the types it looks like it should be closer to: …
This is the place of all API proposals: https://github.com/microsoft/vscode/tree/main/src/vscode-dts

Likely outdated.
Where should I look for the additional logs?
@bwateratmsft I did find something after @roblourens ran into a similar issue. Can you confirm that a later/second invocation yields models and/or that the …
I found some log lines for selection but nothing for registration:
We end up doing language model selection several times, and each time it returns … I even tried doing the following, and still there were no log lines about registration:

However, when using VS Code Desktop as the frontend, I do see log lines about language model registration:
Thanks for the update. That would really mean the language model API doesn't register. Can you check the chat log (Output > GH Copilot Chat) for messages starting with …
I don't see any messages like that from either the browser or desktop frontend on the Codespace. However, in the browser session there was this message, which did not appear in the desktop session; it might be spurious, but I'm not sure:
We have finalized the Language Model and Chat Participant API 🚀 Thus, you can now publish your extensions to the VS Marketplace. This API is finalized in VS Code Insiders and will be finalized in VS Code Stable in July. We therefore suggest that you use an `engines` value of `^1.90.0` in your `package.json`; VS Code Stable will gracefully handle your extension and ignore your usage of the Language Model and Chat API until the API gets finalized in Stable in July.

Full docs: https://code.visualstudio.com/api/extension-guides/language-model

We have decided to only finalize the …

We are excited to see what you build ✨
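For reference, the `engines` constraint mentioned above would look like this in a `package.json`; the name, publisher, and version fields here are placeholder values:

```json
{
  "name": "my-extension",
  "publisher": "my-publisher",
  "version": "0.0.1",
  "engines": {
    "vscode": "^1.90.0"
  }
}
```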
@isidorn I am still wondering if there will be a … For now, it can be shimmed by wrapping …
Still a proposal and not yet planned for finalisation.
This issue is about allowing extensions to use language models that are contributed by other extensions. The API proposal is https://github.com/microsoft/vscode/blob/main/src/vscode-dts/vscode.proposed.languageModels.d.ts and its scope is to allow using the chat feature of language models.
Remaining finalisation todos:

* consider moving token counting into its own proposal (some models don't do token counting, e.g. Anthropic or Ollama)
* `LanguageModelChatResponse`: e.g. have a fake reply message with role like others (Ollama, OpenAI, Anthropic)
* rename `LanguageModelChatResponse#stream` to `text`, and I will continue to explore a stream of structured objects. That would be additive to `text` and plugged in as its foundation.
* `modelResult: Thenable<{ [name: string]: any }>`
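The `stream`-to-`text` rename in the todos above concerns the shape of the response object. A minimal sketch of what consuming such a string stream could look like; the `fragments` helper and `readAll` function are hypothetical illustrations, and only the idea of `text` being an async-iterable of string fragments comes from the discussion:

```typescript
// Hypothetical minimal shape of a chat response whose `text` property
// streams string fragments, as discussed in the finalisation todos.
interface LanguageModelChatResponse {
  text: AsyncIterable<string>;
}

// Fake a streamed reply from fixed fragments (illustration only).
async function* fragments(parts: string[]): AsyncIterable<string> {
  for (const p of parts) {
    yield p;
  }
}

// Concatenate the streamed fragments into the full reply text.
async function readAll(response: LanguageModelChatResponse): Promise<string> {
  let result = '';
  for await (const chunk of response.text) {
    result += chunk;
  }
  return result;
}

const response: LanguageModelChatResponse = { text: fragments(['Hello', ', ', 'world']) };
readAll(response).then(full => console.log(full)); // logs "Hello, world"
```

A structured-object stream, as mentioned in the todo, could then be layered underneath, with `text` derived from it as its foundation.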