
feat: support models served via OpenAI API #14172

Conversation

@sdirix (Member) commented Sep 16, 2024

What it does

The OpenAI settings now allow configuring additional models and their associated endpoints. This can be used to integrate any custom model running locally or in the cloud, as long as it supports the OpenAI API.

Fixes #14174

How to test

  1. Serve a model via the OpenAI API. The following steps assume you run a model on RunPod with vLLM.
  2. Open the "Open AI" preferences and add a custom model, e.g.

         {
             "model": "mistralai/Mistral-7B-Instruct-v0.2",
             "url": "https://<your-pod-id>-8000.proxy.runpod.net/v1",
             "id": "my-mistral2"
         }

  3. Configure and use the model.
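As background for the test steps above: an OpenAI-compatible server such as the one vLLM exposes accepts standard chat-completion requests at `<url>/chat/completions`. The sketch below builds such a request body; `buildChatRequest` is a hypothetical helper for illustration, not part of Theia, and the model id mirrors the example configuration.

```typescript
// Minimal shape of an OpenAI-style chat message and completion request.
interface ChatMessage {
    role: 'system' | 'user' | 'assistant';
    content: string;
}

interface ChatCompletionRequest {
    model: string;
    messages: ChatMessage[];
    stream?: boolean;
}

// Build the JSON body that would be POSTed to <url>/chat/completions.
function buildChatRequest(model: string, prompt: string): ChatCompletionRequest {
    return {
        model,
        messages: [{ role: 'user', content: prompt }],
        stream: true
    };
}

const request = buildChatRequest('mistralai/Mistral-7B-Instruct-v0.2', 'Hello!');
console.log(JSON.stringify(request));
```

Because the endpoint speaks the same protocol as OpenAI's own API, only the base URL and model name differ between an official model and a custom one.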

Follow-ups

  • For now we disable "tool" usage for non-OpenAI models. We should allow enabling and configuring tool usage in some way in the future.
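The follow-up above could look roughly like the sketch below, which drops the `tools` field from a request when the model is served from a custom endpoint. All names here (`ToolDefinition`, `RequestSettings`, `applyToolPolicy`) are illustrative assumptions, not Theia APIs.

```typescript
// Illustrative shape of an OpenAI-style tool (function) definition.
interface ToolDefinition {
    type: 'function';
    function: { name: string; description?: string };
}

interface RequestSettings {
    model: string;
    tools?: ToolDefinition[];
}

// Assumption: custom models are the ones configured with their own URL.
function isCustomModel(url?: string): boolean {
    return url !== undefined;
}

// Omit tool definitions for custom (non-OpenAI) endpoints, mirroring the
// current behavior of disabling tool usage there.
function applyToolPolicy(settings: RequestSettings, url?: string): RequestSettings {
    if (isCustomModel(url)) {
        const { tools, ...rest } = settings;
        return rest;
    }
    return settings;
}
```

A future configuration option could simply replace the `isCustomModel` check with a per-model flag.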

Review checklist

Reminder for reviewers

@planger (Contributor) left a comment:
Very nice, works great and code looks good! Thank you!

Co-authored-by: Philip Langer <planger@eclipsesource.com>
@JonasHelming JonasHelming merged commit 5b07819 into eclipse-theia:master Sep 17, 2024
11 checks passed
@sgraband sgraband added this to the 1.54.0 milestone Sep 26, 2024
Successfully merging this pull request may close these issues.

Support for VLLM (165)
4 participants