
feat: configure dynamic providers via .env #1108

Merged
8 commits merged into stackblitz-labs:main
Jan 17, 2025

Conversation

mrsimpson
Collaborator

@mrsimpson mrsimpson commented Jan 16, 2025

Motivation

Up to now, some providers couldn't be configured properly via the environment: their list of models remained empty.

What this PR contains

Previously, the list of models was retrieved from the front-end. If the list wasn't hard-coded (as it is for many providers), the system tried to retrieve the models via the provider's /models route.
Since the environment configuration, which includes the API keys, is intentionally not shared with the client, this request couldn't complete and failed silently on the client.
This affected the OpenAILike and OpenRouter providers once they required API keys.

This PR implements a more robust and flexible approach to retrieving model lists by moving the initialization logic to the backend, enabling dynamic model discovery and configuration across different providers.

Key Changes

  1. Model List Retrieval
    Replaced static MODEL_LIST with dynamic backend-driven model fetching
    Created a new /api/models endpoint to serve model information
    Implemented useModels hook for frontend model access
  2. Provider and Model Management
    Refactored model initialization to use LLMManager
    Added more flexible provider and model configuration
    Improved handling of API keys and provider settings

Technical Details

LLMManager

Key implementation details:

  • Enhanced updateModelList method to handle dynamic model discovery
  • Improved provider configuration detection
  • Added method to retrieve providers and default provider
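For illustration, here is a minimal TypeScript sketch of how the enhanced updateModelList could merge static models with dynamically discovered ones. The names and signatures (ProviderSketch, getDynamicModels) are illustrative, not the PR's actual code:

```typescript
interface ModelInfo {
  name: string;
  provider: string;
}

interface ProviderSketch {
  name: string;
  staticModels: ModelInfo[];
  // Only providers with dynamic discovery implement this.
  getDynamicModels?: (apiKey?: string) => Promise<ModelInfo[]>;
}

class LLMManagerSketch {
  constructor(private providers: ProviderSketch[]) {}

  // Merge static models with whatever each configured provider reports.
  async updateModelList(apiKeys: Record<string, string>): Promise<ModelInfo[]> {
    const models: ModelInfo[] = [];
    for (const p of this.providers) {
      models.push(...p.staticModels);
      if (p.getDynamicModels && apiKeys[p.name]) {
        try {
          models.push(...(await p.getDynamicModels(apiKeys[p.name])));
        } catch {
          // An unreachable provider contributes no models, but the
          // failure happens on the server instead of silently on the client.
        }
      }
    }
    return models;
  }
}
```

Because this runs on the backend, the API keys from the environment never have to reach the browser.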

API Endpoints

/api/models now returns:

  • Complete model list
  • Available providers
  • Default provider configuration
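A hedged sketch of that response shape in TypeScript (the type and function names here are illustrative, not the exact ones in the PR). The point is that the payload is assembled server-side, so only the resulting model list reaches the client:

```typescript
interface ModelsResponse {
  modelList: { name: string; provider: string }[];
  providers: { name: string }[];
  defaultProvider: { name: string } | null;
}

// Assemble the /api/models payload on the server; env-configured API keys
// are used during discovery but never included in the response.
function buildModelsResponse(
  modelList: ModelsResponse['modelList'],
  providers: ModelsResponse['providers'],
  defaultProviderName?: string,
): ModelsResponse {
  return {
    modelList,
    providers,
    defaultProvider:
      providers.find((p) => p.name === defaultProviderName) ?? null,
  };
}
```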

Related

#1035

# Conflicts:
#	app/components/chat/BaseChat.tsx
@mrsimpson mrsimpson requested a review from thecodacus January 16, 2025 09:23
@thecodacus
Collaborator

I have added my comments. I didn't want to add another API endpoint, but I guess that's the only way to support API keys both from the UI and from the env, since we should not expose API keys set in the env to the UI.

The only change I think is needed is passing the UI data via cookies so that it works in every scenario, env or UI.

@thecodacus
Collaborator

thecodacus commented Jan 16, 2025

Also, there is a section where, when the apiKey changes in the UI, we update the models for that provider (in the BaseChat.tsx file).
[screenshot of the relevant section in BaseChat.tsx]

But since we are now using an endpoint to get the models list, shall we change this as well?

@mrsimpson
Collaborator Author

Thanks for the catch with the override, @thecodacus !

I re-enabled the provisioning of apiKeys from the UI.

In the video, you can see the API key being loaded from the env, then overridden with an invalid API key (the models list becomes blank, as before), and then overridden again with the valid key from the UI.
https://github.com/user-attachments/assets/ed28b841-7f84-45fb-8602-163b5b786753

Hope this is fine now.

@thecodacus
Collaborator

In the video, you can see the API key being loaded from the env, then overridden with an invalid API key (the models list becomes blank, as before), and then overridden again with the valid key from the UI.
https://github.com/user-attachments/assets/ed28b841-7f84-45fb-8602-163b5b786753

But I believe in this case it's not using the /models endpoint; it's using the browser to call the providers directly. Can you confirm that by checking the browser console?

@mrsimpson
Collaborator Author

@thecodacus

I now transport all the apiKeys and settings via cookies. It still makes me shiver to have credentials from an unencrypted text file transported as plain text, but ... maybe I have been too corporate all my dev life ;)

you asked

But I believe in this case it's not using the /models endpoint; it's using the browser to call the providers directly. Can you confirm that by checking the browser console?

I verified manually in the dev tools: when changing the apiKey in the UI, a new request to /models is performed.
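For clarity, a minimal sketch of how such a request could carry the UI-entered keys to the backend in a cookie rather than in the URL or a query string (the helper name modelsRequestCookie is hypothetical, not from this PR):

```typescript
// Serialize the UI-held API keys into a cookie value for the
// request to /api/models; the server parses it back on its side.
function modelsRequestCookie(apiKeys: Record<string, string>): string {
  return `apiKeys=${encodeURIComponent(JSON.stringify(apiKeys))}`;
}
```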

@mrsimpson mrsimpson requested a review from thecodacus January 17, 2025 14:25
@thecodacus
Collaborator

All looks good to me. I will just run it once, then we can merge it.

@mrsimpson
Collaborator Author

mrsimpson commented Jan 17, 2025

@thecodacus please let me merge it if it looks OK to you. I want to make sure I pushed the latest changes, but I'm AFK now.

Edit: all good, it was just the mobile GitHub app not refreshing 🤦

@mrsimpson mrsimpson added the epic:llm Model interaction label Jan 17, 2025
@thecodacus thecodacus merged commit e196442 into stackblitz-labs:main Jan 17, 2025
5 checks passed
@thecodacus thecodacus added this to the v0.0.6 milestone Jan 17, 2025
timoa pushed a commit to timoa/bolt.diy that referenced this pull request Jan 21, 2025
* Use backend API route to fetch dynamic models

# Conflicts:
#	app/components/chat/BaseChat.tsx

* Override ApiKeys if provided in frontend

* Remove obsolete artifact

* Transport api keys from client to server in header

* Cache static provider information

* Restore reading provider settings from cookie

* Reload only a single provider on api key change

* Transport apiKeys and providerSettings via cookies.

While doing this, introduce a simple helper function for cookies
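
A cookie helper along those lines could look like the following sketch (illustrative only; the actual helper in the PR may differ):

```typescript
// Parse a raw Cookie header into a name/value map. Values such as the
// serialized apiKeys object are URL-encoded JSON.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of header.split(';')) {
    const idx = part.indexOf('=');
    if (idx === -1) continue; // skip malformed fragments
    const key = decodeURIComponent(part.slice(0, idx).trim());
    const value = decodeURIComponent(part.slice(idx + 1).trim());
    if (key) out[key] = value;
  }
  return out;
}
```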