
support azure deployment name #4930

Merged · 6 commits · merged into ChatGPTNextWeb:main on Jul 6, 2024
Conversation

@lloydzhou (Contributor) commented Jul 5, 2024

Summary by CodeRabbit

  • New Features

    • Introduced Azure deployment handling for authentication and API requests.
    • Added support for providerName and displayName in model configurations.
  • Bug Fixes

    • Improved logic for determining Azure context and handling requests accordingly.
  • Refactor

    • Unified API handling for Azure and OpenAI models.
    • Updated logic to use providerName for configuration and model selection.
  • Chores

    • Added a new proxy route for Azure deployments in configuration.


vercel bot commented Jul 5, 2024

@lloydzhou is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Jul 5, 2024

Walkthrough

This update refines how the system handles Azure deployments, streamlining authentication and URL construction based on request paths. It introduces Azure-specific request handling, augments model configurations with provider names, and consolidates model API usage for both Azure and OpenAI. These enhancements ensure a smooth, integrated experience for handling various AI model providers within the application's ecosystem.

Changes

File Path | Change Summary
app/api/auth.ts | Adjusted the condition to check the request URL for handling Azure deployments instead of relying solely on server configuration.
app/api/common.ts | Refactored Azure deployment handling to check the request URL, updating authentication logic and URL construction.
app/api/azure/[...path]/route.ts | Added a new route-handling file for Azure deployments, incorporating authentication and request processing for OpenAI.
app/client/api.ts | Extended the LLMConfig and LLMModel interfaces with provider fields; updated getHeaders to handle provider-specific logic.
app/client/platforms/openai.ts | Unified LLMApi for Azure and OpenAI models, handling Azure-specific configurations and API paths.
app/components/chat.tsx | Incorporated the ServiceProvider constant; updated model provider handling and configuration logic in the chat component.
app/components/exporter.tsx | Updated logic to select ClientApi based on the model configuration's provider name.
app/components/home.tsx | Modified logic in useLoadData to use providerName instead of model-specific checks.
app/components/model-config.tsx | Enhanced the UI component to handle model and provider name, including validation and update logic based on provider.
app/constant.ts | Introduced a new Azure API path and constants for easier reference in the application.
app/store/access.ts | Added DEFAULT_AZURE_URL based on build mode, setting the default value for azureUrl.
app/store/chat.ts | Updated logic for selecting ClientApi instances based on modelConfig.providerName for clarity and consistency.
app/store/config.ts | Added a providerName field defaulting to "OpenAI" in DEFAULT_CONFIG for better configuration handling.
app/utils/hooks.ts | Adjusted useAllModels dependencies to include accessStore.defaultModel.
next.config.mjs | Added a proxy route configuration for Azure deployments, redirecting requests to Azure endpoints.
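
Several rows above describe selecting the ClientApi from modelConfig.providerName. A rough sketch of that pattern (reconstructed from the summaries, not copied from the PR; the ModelProvider members are assumed from the surrounding codebase):

function clientApiFor(providerName: ServiceProvider): ClientApi {
  // Azure reuses the unified GPT client; only the request path and
  // headers differ (handled inside app/client/platforms/openai.ts).
  switch (providerName) {
    case ServiceProvider.Google:
      return new ClientApi(ModelProvider.GeminiPro);
    case ServiceProvider.Anthropic:
      return new ClientApi(ModelProvider.Claude);
    default: // OpenAI and Azure
      return new ClientApi(ModelProvider.GPT);
  }
}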

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Auth
    participant Common
    participant AzureRoute
    participant OpenAI

    Client->>Auth: Request authentication
    Auth->>Common: Forward request 
    Common->>AzureRoute: Check if Azure deployment
    AzureRoute-->>Common: Confirm Azure deployment
    Common->>AzureRoute: Construct Azure-specific URL
    AzureRoute->>OpenAI: Request data from OpenAI service
    OpenAI-->>AzureRoute: Send response to AzureRoute
    AzureRoute-->>Common: Return response
    Common-->>Auth: Return authentication response
    Auth-->>Client: Send response to client

Poem

In the code where algorithms thrive,
A shift for Azure now arrives.
Paths and models unified,
Through streamlined routes we glide.
For OpenAI and Azure's flight,
Together, they shine bright! 🌐✨



@lloydzhou changed the title from "Feature azure" to "support azure deployment name" on Jul 5, 2024
@coderabbitai bot left a comment

Actionable comments posted: 1

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between c4a6c93 and 6b1b530.

Files selected for processing (14)
  • app/api/auth.ts (1 hunks)
  • app/api/common.ts (2 hunks)
  • app/client/api.ts (5 hunks)
  • app/client/platforms/openai.ts (5 hunks)
  • app/components/chat.tsx (4 hunks)
  • app/components/exporter.tsx (2 hunks)
  • app/components/home.tsx (3 hunks)
  • app/components/model-config.tsx (3 hunks)
  • app/constant.ts (3 hunks)
  • app/store/access.ts (2 hunks)
  • app/store/chat.ts (4 hunks)
  • app/store/config.ts (2 hunks)
  • app/utils/hooks.ts (1 hunks)
  • next.config.mjs (1 hunks)
Additional context used
Biome
app/api/auth.ts

[error] 76-76: Useless case clause, because the default clause is present.

Unsafe fix: remove the useless case.

(lint/complexity/noUselessSwitchCase)

Additional comments not posted (28)
app/utils/hooks.ts (1)

14-19: LGTM!

The useMemo dependency array correctly includes all the relevant variables.

next.config.mjs (1)

72-76: Ensure correct Azure API URL construction.

The Azure API URL construction looks correct, but verify the placeholders {resource_name} and {deploy_name} are properly replaced in actual usage.
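
For reference, a fully substituted Azure chat-completions URL generally takes this shape (values illustrative):

https://{resource_name}.openai.azure.com/openai/deployments/{deploy_name}/chat/completions?api-version=2023-08-01-preview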

app/api/auth.ts (1)

78-78: Remove the redundant case clause.

The case clause for ModelProvider.GPT is redundant due to the default clause being present.

-      case ModelProvider.GPT:

Likely invalid or redundant comment.
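
For context, the flagged pattern is of this form (a reconstruction, not the exact source; field names are assumed):

switch (modelProvider) {
  case ModelProvider.GeminiPro:
    systemApiKey = serverConfig.googleApiKey;
    break;
  case ModelProvider.Claude:
    systemApiKey = serverConfig.anthropicApiKey;
    break;
  case ModelProvider.GPT: // flagged as useless: it simply falls through
  default:
    systemApiKey = isAzure ? serverConfig.azureApiKey : serverConfig.apiKey;
}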

app/store/access.ts (2)

36-38: LGTM!

The default Azure API version is correctly set.


20-23: Verify Azure URL configuration.

Ensure that the Azure URL configuration correctly handles the {resource_name} placeholder in actual usage.
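
One plausible shape for this default (a sketch only; the build-mode condition, constants, and proxy path are assumptions, not the PR's exact code):

const DEFAULT_AZURE_URL =
  getClientConfig()?.buildMode === "export"
    ? // app builds talk to the proxy route with the placeholder intact
      DEFAULT_API_HOST + "/api/proxy/azure/{resource_name}"
    : // web builds go through the local Azure API route added in this PR
      ApiPath.Azure;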

app/store/config.ts (1)

8-8: LGTM! But verify the usage of providerName in the codebase.

The addition of the providerName field to the modelConfig object is appropriate for handling multiple service providers.

However, ensure that all references to modelConfig are updated to handle providerName.

Also applies to: 52-52

Verification successful

LGTM! The providerName field is correctly handled throughout the codebase.

The addition of the providerName field to the modelConfig object is consistent and appropriately integrated into the codebase.

  • The field is used in conditional checks and assignments.
  • It is referenced in multiple files, ensuring comprehensive integration.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `providerName` in the codebase.

# Test: Search for the usage of `modelConfig` to ensure `providerName` is handled correctly.
rg --type-add 'ts' --type ts 'modelConfig'

Length of output: 100


Script:

#!/bin/bash
# Description: Verify the usage of `providerName` in the codebase.

# Test: Search for the usage of `modelConfig` to ensure `providerName` is handled correctly.
rg --type-add 'ts:*.ts' 'modelConfig'

Length of output: 12120

app/api/common.ts (2)

60-66: LGTM! But verify the correctness of the constructed paths and URLs.

The logic for constructing the Azure-specific path and base URL is appropriate for handling Azure deployment requests.

However, ensure that the constructed paths and URLs are correct and there are no edge cases that need to be handled.


16-17: LGTM! But verify the sufficiency of the isAzure check.

The addition of the isAzure variable and its usage in the authentication logic is appropriate for handling Azure-specific requests.

However, ensure that the isAzure check is sufficient and there are no edge cases that need to be handled.

Also applies to: 20-20
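
A minimal sketch of the check being described (the exact predicate in the PR may differ):

// Decide from the request URL rather than from server config alone.
const isAzure = req.nextUrl.pathname.includes("azure/deployments");

// Outgoing authentication differs by provider: Azure expects the key in
// an "api-key" header, while OpenAI expects "Authorization: Bearer <key>".
const authHeaderName = isAzure ? "api-key" : "Authorization";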

app/client/api.ts (3)

33-33: LGTM! But verify the usage of providerName in the codebase.

The addition of the providerName field to the LLMConfig interface is appropriate for handling multiple service providers.

However, ensure that all references to LLMConfig are updated to handle providerName.

Verification successful

Verified the usage of providerName in the codebase.

The addition of the providerName field to the LLMConfig interface is consistently used across the codebase, ensuring proper handling of multiple service providers.

  • app/client/api.ts
  • app/constant.ts
  • app/utils/model.ts
  • app/store/chat.ts
  • app/store/config.ts
  • app/client/platforms/openai.ts
  • app/client/platforms/anthropic.ts
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `providerName` in the codebase.

# Test: Search for the usage of `LLMConfig` to ensure `providerName` is handled correctly.
rg --type-add 'ts' --type ts 'LLMConfig'

Length of output: 98


Script:

#!/bin/bash
# Description: Verify the usage of `providerName` in the codebase.

# Test: Search for the usage of `LLMConfig` to ensure `providerName` is handled correctly.
rg 'LLMConfig' --glob '*.ts'

# Test: Search for the usage of `providerName` to ensure it is being used correctly.
rg 'providerName' --glob '*.ts'

Length of output: 1911


Line range hint 165-195:
LGTM! But verify the correctness of the authentication logic.

The modifications to the getHeaders function to handle the new providerName field and adjust the authentication headers based on the provider type are appropriate.

However, ensure that the logic for determining the authentication header and API key is correct and there are no edge cases that need to be handled.

Verification successful

Authentication Logic Verified

The modifications to the getHeaders function handle the new providerName field and adjust the authentication headers based on the provider type. The logic for determining the authentication header and API key appears to be consistent and correctly implemented across different contexts.

  • app/store/access.ts: The headers are correctly applied for fetching configuration.
  • app/client/platforms/google.ts: The headers are correctly applied for Google API requests.
  • app/client/platforms/openai.ts: The headers are correctly applied for OpenAI API requests.
  • app/client/platforms/anthropic.ts: The headers are correctly applied for Anthropic API requests.
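
To make the described behavior concrete, a simplified sketch (assumed shape, not the PR's exact code):

function getAuthHeader(providerName: ServiceProvider, apiKey: string) {
  // Azure's REST API authenticates with an "api-key" header; OpenAI-style
  // endpoints expect "Authorization: Bearer <key>".
  return providerName === ServiceProvider.Azure
    ? { "api-key": apiKey }
    : { Authorization: "Bearer " + apiKey };
}
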
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the correctness of the authentication logic.

# Test: Search for all occurrences of `getHeaders` to ensure the authentication logic is correct.
rg --type-add 'ts' --type ts 'getHeaders'

Length of output: 99


Script:

#!/bin/bash
# Description: Verify the correctness of the authentication logic.

# Test: Search for all occurrences of `getHeaders` to ensure the authentication logic is correct.
rg 'getHeaders'

Length of output: 830


Script:

#!/bin/bash
# Extract the relevant code snippets from the files where `getHeaders` is used.

# Define the files to search.
files=(
  "app/store/access.ts"
  "app/client/platforms/google.ts"
  "app/client/platforms/openai.ts"
  "app/client/platforms/anthropic.ts"
)

# Loop through the files and extract lines around `getHeaders` usage.
for file in "${files[@]}"; do
  echo "File: $file"
  rg -C 5 'getHeaders' "$file"
  echo
done

Length of output: 4061


58-58: LGTM! But verify the usage of displayName in the codebase.

The addition of the displayName field to the LLMModel interface is appropriate for providing a user-friendly name for models.

However, ensure that all references to LLMModel are updated to handle displayName.

Verification successful

The displayName field is correctly integrated and used across the codebase.

The references in various files such as app/utils/model.ts, app/client/platforms/openai.ts, app/components/model-config.tsx, and app/components/chat.tsx confirm that displayName is appropriately handled.

  • app/utils/model.ts: displayName is set and accessed.
  • app/client/platforms/openai.ts: displayName is referenced.
  • app/components/model-config.tsx and app/components/chat.tsx: displayName is used for display purposes.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `displayName` in the codebase.

# Test: Search for the usage of `LLMModel` to ensure `displayName` is handled correctly.
rg --type-add 'ts' --type ts 'LLMModel'

Length of output: 97


Script:

#!/bin/bash
# Description: Verify the usage of `displayName` in the codebase.

# Test: Search for the usage of `LLMModel` to ensure `displayName` is handled correctly.
rg --type ts 'LLMModel'

Length of output: 1129


Script:

#!/bin/bash
# Description: Verify the usage of `displayName` in the codebase by inspecting the relevant lines.

# Extract and display the relevant lines from the identified files.
rg --type ts 'LLMModel' -A 5 -B 5

Length of output: 7319


Script:

#!/bin/bash
# Description: Verify the usage of the `displayName` field in the codebase.

# Search for the usage of the `displayName` field in the codebase.
rg --type ts 'displayName'

Length of output: 797

app/components/home.tsx (2)

15-15: LGTM! But verify the usage of ServiceProvider in the codebase.

The addition of the ServiceProvider constant in the import statement is appropriate for handling multiple service providers.

However, ensure that all references to ServiceProvider are updated to handle it correctly.

Verification successful

Let's correct the rg command to search for ServiceProvider directly in TypeScript files.


The usage of ServiceProvider is verified and consistent across the codebase.

The import of ServiceProvider in home.tsx aligns with its usage in other parts of the code, confirming that it is handled correctly.

  • app/constant.ts
  • app/store/config.ts
  • app/store/access.ts
  • app/components/home.tsx
  • app/components/model-config.tsx
  • app/components/settings.tsx
  • app/components/exporter.tsx
  • app/components/chat.tsx
  • app/store/chat.ts
  • app/client/api.ts
  • app/client/platforms/openai.ts
  • app/api/common.ts
  • app/api/anthropic/[...path]/route.ts
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `ServiceProvider` in the codebase.

# Test: Search for the usage of `ServiceProvider` to ensure it is handled correctly.
rg --type-add 'ts' --type ts 'ServiceProvider'

Length of output: 104


Script:

#!/bin/bash
# Description: Verify the usage of `ServiceProvider` in the codebase.

# Test: Search for the usage of `ServiceProvider` to ensure it is handled correctly.
rg 'ServiceProvider' --glob '*.ts' --glob '*.tsx'

Length of output: 3756


174-176: LGTM! But verify the correctness of the logic for selecting ClientApi.

The modifications to the logic for selecting ClientApi based on config.modelConfig.providerName are appropriate.

However, ensure that the logic for selecting the ClientApi is correct and there are no edge cases that need to be handled.

app/constant.ts (2)

28-30: LGTM!

The added ApiPath enum values for Azure, OpenAI, and Anthropic are appropriate and consistent with the existing enum structure.
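
Consistent with that summary, the enum plausibly reads as follows (the Cors entry is assumed from the pre-existing structure):

export enum ApiPath {
  Cors = "/api/cors",
  Azure = "/api/azure",
  OpenAI = "/api/openai",
  Anthropic = "/api/anthropic",
}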


97-98: LGTM!

The added ServiceProvider and ModelProvider enum values, as well as the constants for Azure, Google, and Anthropic API paths and models, are appropriate and consistent with the existing structure.

app/components/model-config.tsx (2)

14-26: LGTM!

The changes to include the provider name in the value and onChange logic for the Select component are logical and ensure the correct handling of the provider name in the configuration.


96-96: LGTM!

The conditional rendering based on providerName is appropriate and ensures that certain settings are only shown for specific providers, improving configurability and user experience.

app/client/platforms/openai.ts (3)

67-67: LGTM!

The added isAzure logic in the path method appropriately determines Azure paths. The error handling for incomplete Azure configuration is useful for guiding the user.


80-91: LGTM!

The conditional path construction logic ensures that the correct base URL is used for Azure and OpenAI paths.
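
In outline, the construction described here behaves like the following (a sketch with assumed names):

// Pick the configured base URL for the active provider, fall back to the
// in-app proxy path when none is set, and normalize the trailing slash.
const baseUrl =
  (isAzure ? accessStore.azureUrl : accessStore.openaiUrl) ||
  (isAzure ? ApiPath.Azure : ApiPath.OpenAI);
return [baseUrl.replace(/\/+$/, ""), path].join("/");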


144-169: LGTM!

The added Azure-specific logic in the chat method appropriately handles Azure paths and configurations. The use of collectModelsWithDefaultModel to find the correct model is useful for flexibility.

app/components/exporter.tsx (1)

320-322: LGTM!

The added logic for selecting the ClientApi instance based on the providerName ensures that the correct API is used for different providers.

app/store/chat.ts (4)

Line range hint 452-454: Verify the logic for providerName in model selector.

Ensure that all possible values of providerName are correctly handled when switching models.

Verification successful

Line range hint 452-454: Verified the logic for providerName in model selector.

The code correctly handles all possible values for providerName when switching models, ensuring the correct provider is associated with each model.

  • Valid providerName values: "OpenAI", "Azure", "Google", "Anthropic", "Custom"
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all possible values for `providerName` are handled correctly.

# Test: Search for all possible values of `providerName`. Expect: Only valid values are used.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'providerName'

Length of output: 135


Script:

#!/bin/bash
# Description: Verify all possible values for `providerName` are handled correctly.

# Test: Search for all possible values of `providerName`. Expect: Only valid values are used.
rg --type-add 'ts:*.ts' --type-add 'tsx:*.tsx' --type-add 'js:*.js' --type-add 'jsx:*.jsx' 'providerName'

Length of output: 3612


Line range hint 486-492: Ensure correct model and provider assignment.

The logic for updating the session with the new model and provider should be verified to ensure correctness.

Verification successful

Ensure correct model and provider assignment.

The logic for updating the session with the new model and provider appears to be correctly implemented in the app/components/chat.tsx file. The session.mask.modelConfig.model and session.mask.modelConfig.providerName are being assigned appropriately.

  • app/components/chat.tsx:
    • session.mask.modelConfig.model = nextModel.name;
    • session.mask.modelConfig.providerName = nextModel?.provider?.providerName as ServiceProvider;
    • session.mask.modelConfig.model = model as ModelType;
    • session.mask.modelConfig.providerName = providerName as ServiceProvider;
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the logic for updating the session with the new model and provider.

# Test: Search for the update logic. Expect: Correct assignment of model and provider.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'updateCurrentSession'

Length of output: 143


Script:

#!/bin/bash
# Correctly define types and search for the updateCurrentSession function
rg --type-add 'typescript:*.ts' --type-add 'typescriptreact:*.tsx' --type-add 'javascript:*.js' --type-add 'javascriptreact:*.jsx' 'updateCurrentSession'

Length of output: 1777


Script:

#!/bin/bash
# Extract the implementation details of updateCurrentSession from the relevant files
rg --type-add 'typescript:*.ts' --type-add 'typescriptreact:*.tsx' --type-add 'javascript:*.js' --type-add 'javascriptreact:*.jsx' -A 10 'updateCurrentSession' app/store/chat.ts app/components/chat.tsx

Length of output: 13214


367-369: Verify the conditional logic for providerName.

Ensure that all possible values of providerName are correctly handled. Missing values can lead to incorrect ClientApi instantiation.


Line range hint 580-599: Ensure correct handling of model selection and provider assignment.

The logic for handling model selection and provider assignment should be verified to ensure correctness.

Verification successful

Handling of model selection and provider assignment is correctly implemented.

The code correctly handles model selection and provider assignment by setting the appropriate API client based on the providerName and using the model configuration for various parameters and defaults.

  • app/store/chat.ts: Multiple instances show correct usage of modelConfig for setting model properties and checking provider names.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the handling of model selection and provider assignment.

# Test: Search for the handling logic. Expect: Correct handling of model selection and provider assignment.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'modelConfig'

Length of output: 134


Script:

#!/bin/bash
# Description: Verify the handling of model selection and provider assignment.

# Test: Search for the handling logic. Expect: Correct handling of model selection and provider assignment.
rg 'modelConfig' --type ts --type tsx --type js --type jsx

Length of output: 86


Script:

#!/bin/bash
# Description: Verify the handling of model selection and provider assignment.

# Test: Search for the handling logic. Expect: Correct handling of model selection and provider assignment.
rg 'modelConfig' -g '*.ts' -g '*.tsx' -g '*.js' -g '*.jsx'

Length of output: 12141


Script:

#!/bin/bash
# Description: Extract and review relevant code snippets from app/store/chat.ts to ensure the correct handling of model selection and provider assignment.

# Extract lines where modelConfig is used in app/store/chat.ts
rg 'modelConfig' app/store/chat.ts -n

Length of output: 2572


Script:

#!/bin/bash
# Description: Extract relevant code snippets from app/store/chat.ts for detailed review.

# Extract relevant code snippets around identified lines
sed -n -e '117,130p' -e '220,230p' -e '313,323p' -e '346,356p' -e '367,379p' -e '442,454p' -e '476,488p' -e '501,511p' -e '548,560p' -e '578,590p' -e '600,630p' -e '640,650p' -e '698,710p' -e '719,730p' app/store/chat.ts

Length of output: 7170

app/components/chat.tsx (3)

580-599: Ensure correct handling of model selection and provider assignment.

The logic for handling model selection and provider assignment should be verified to ensure correctness.

Verification successful

The handling of model selection and provider assignment appears to be correctly implemented.

The logic for updating the modelConfig with the selected model and provider name is present and used consistently throughout the codebase.

  • app/components/chat.tsx: The modelConfig is updated with the selected model and provider name.
  • Other files also show consistent usage and updates to the modelConfig.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the handling of model selection and provider assignment.

# Test: Search for the handling logic. Expect: Correct handling of model selection and provider assignment.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'modelConfig'

Length of output: 134


Script:

#!/bin/bash
# Description: Verify the handling of model selection and provider assignment.

# Test: Search for the handling logic. Expect: Correct handling of model selection and provider assignment.
rg --glob '*.ts' --glob '*.tsx' --glob '*.js' --glob '*.jsx' 'modelConfig'

Length of output: 12157


452-454: Verify the logic for providerName in model selector.

Ensure that all possible values of providerName are correctly handled when switching models.

Verification successful

The logic for providerName in the model selector is correctly implemented.

The codebase consistently handles all possible values of providerName ("OpenAI", "Azure", "Google", "Anthropic", and "Custom") across various files.

  • app/constant.ts
  • app/components/chat.tsx
  • app/components/model-config.tsx
  • app/components/home.tsx
  • app/components/exporter.tsx
  • app/utils/model.ts
  • app/store/config.ts
  • app/store/chat.ts
  • app/client/api.ts
  • app/client/platforms/openai.ts
  • app/client/platforms/anthropic.ts
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all possible values for `providerName` are handled correctly.

# Test: Search for all possible values of `providerName`. Expect: Only valid values are used.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'providerName'

Length of output: 135


Script:

#!/bin/bash
# Description: Verify all possible values for `providerName` are handled correctly.

# Correcting the --type-add syntax to search for providerName across relevant files.
rg --type-add 'ts:*.ts' --type-add 'tsx:*.tsx' --type-add 'js:*.js' --type-add 'jsx:*.jsx' 'providerName'

Length of output: 3612


486-492: Ensure correct model and provider assignment.

The logic for updating the session with the new model and provider should be verified to ensure correctness.

Verification successful

The logic for updating the session with the new model and provider is correctly implemented.

  • The assignment of nextModel.name to session.mask.modelConfig.model and nextModel?.provider?.providerName to session.mask.modelConfig.providerName ensures that both the model and provider are updated correctly.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the logic for updating the session with the new model and provider.

# Test: Search for the update logic. Expect: Correct assignment of model and provider.
rg --type-add 'ts' --type-add 'tsx' --type-add 'js' --type-add 'jsx' 'updateCurrentSession'

Length of output: 143


Script:

#!/bin/bash
# Description: Verify the logic for updating the session with the new model and provider.

# Test: Search for the update logic. Expect: Correct assignment of model and provider.
rg --type ts --type tsx --type js --type jsx 'updateCurrentSession'

Length of output: 95


Script:

#!/bin/bash
# Description: Verify the logic for updating the session with the new model and provider.

# Test: Search for the update logic. Expect: Correct assignment of model and provider.
rg --type ts --type js 'updateCurrentSession'

Length of output: 1669


Script:

#!/bin/bash
# Description: Extract relevant lines from app/components/chat.tsx to verify the logic for updating the session with the new model and provider.

# Extract lines around the instances of updateCurrentSession in app/components/chat.tsx
rg --type ts --type js -A 5 -B 5 'updateCurrentSession' app/components/chat.tsx

Length of output: 5267

@coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 6b1b530 and d9758be.

Files selected for processing (2)
  • app/api/common.ts (2 hunks)
  • app/client/platforms/openai.ts (5 hunks)
Files skipped from review as they are similar to previous changes (1)
  • app/api/common.ts
Additional comments not posted (4)
app/client/platforms/openai.ts (4)

8-8: Imports look good.

The new imports for Azure and ServiceProvider are necessary for the added logic.


144-172: LGTM!

The changes in the fetchEventSource logic correctly handle Azure's text moderation results. The logic looks sound.


67-91: LGTM! Ensure Azure configuration is correct.

The changes in the path method correctly identify and handle Azure requests. Verify that the Azure configuration is complete and valid.

Verification successful

LGTM! Ensure Azure configuration is correct.

The changes in the path method correctly identify and handle Azure requests. The isValidAzure method validates the presence of essential Azure configuration properties (azureUrl, azureApiKey, azureApiVersion). Ensure that these properties are correctly set in your settings.

  • app/store/access.ts: isValidAzure method implementation.
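
The validation described is likely of this shape (a reconstruction consistent with the description; the ensure helper is assumed):

isValidAzure() {
  // All three settings must be non-empty for Azure requests to be built.
  return ensure(get(), ["azureUrl", "azureApiKey", "azureApiVersion"]);
}
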
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that Azure configuration is complete and valid.

# Test: Search for the Azure configuration validation. Expect: Proper validation logic.
rg --type python 'isValidAzure'

Length of output: 62


Script:

#!/bin/bash
# Description: Verify that Azure configuration is complete and valid.

# Test: Search for the Azure configuration validation in TypeScript files. Expect: Proper validation logic.
rg --type ts 'isValidAzure'

Length of output: 202


Script:

#!/bin/bash
# Description: Extract the implementation of the isValidAzure method.

# Extract the isValidAzure method implementation from app/store/access.ts
rg --type ts --context 10 'isValidAzure()' app/store/access.ts

Length of output: 1006


Line range hint 116-170:
LGTM! Verify model collection logic.

The changes in the chat method correctly handle the provider-specific API paths. Verify that the model collection logic is accurate and complete.

Verification successful

LGTM! Verify model collection logic.

The changes in the chat method correctly handle the provider-specific API paths. The model collection logic, as implemented in collectModelsWithDefaultModel, collectModelTableWithDefaultModel, and collectModelTable, is accurate and complete.

  • collectModelsWithDefaultModel is located in app/utils/model.ts (lines 107-119).
  • collectModelTableWithDefaultModel is located in app/utils/model.ts (lines 77-92).
  • collectModelTable is located in app/utils/model.ts (lines 10-75).
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the model collection logic is accurate and complete.

# Test: Search for the model collection function. Expect: Proper model collection logic.
rg --type python 'collectModelsWithDefaultModel'

Length of output: 79


Script:

#!/bin/bash
# Description: Verify that the model collection logic is accurate and complete.

# Test: Search for the model collection function. Expect: Proper model collection logic.
rg --type ts 'function collectModelsWithDefaultModel'

Length of output: 121


Script:

#!/bin/bash
# Extract the implementation of the collectModelsWithDefaultModel function from app/utils/model.ts
ast-grep --lang typescript --pattern 'function collectModelsWithDefaultModel($$$) { $$$ }'

Length of output: 725


Script:

#!/bin/bash
# Extract the implementation of the collectModelTableWithDefaultModel function from app/utils/model.ts
ast-grep --lang typescript --pattern 'function collectModelTableWithDefaultModel($$$) { $$$ }'

Length of output: 893


Script:

#!/bin/bash
# Extract the implementation of the collectModelTable function from app/utils/model.ts
ast-grep --lang typescript --pattern 'function collectModelTable($$$) { $$$ }'

Length of output: 3516


vercel bot commented Jul 5, 2024

The latest updates on your projects:

Name | Status | Updated (UTC)
nextchat | ❌ Failed | Jul 5, 2024 0:53am

@lloydzhou self-assigned this on Jul 5, 2024
@lloydzhou requested a review from Dogtiti on Jul 5, 2024 12:50
@lloydzhou added the "planned" label (planned feature, will support in the future) on Jul 5, 2024
@coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between d9758be and 183ad2a.

Files selected for processing (1)
  • app/api/azure/[...path]/route.ts (1 hunks)
Additional comments not posted (5)
app/api/azure/[...path]/route.ts (5)

1-6: Imports look good.

The imported modules and utilities appear necessary and relevant for the functionality provided.


8-18: Initial part of the handle function looks good.

The function correctly logs parameters, handles OPTIONS method, and constructs the subpath.


20-25: Authentication and error handling look good.

The function correctly performs authentication and handles errors appropriately.


27-32: Request handling and error logging look good.

The function correctly makes a request using requestOpenai and handles errors appropriately.


35-57: Export statements and runtime configuration look good.

The file correctly exports the handle function for GET and POST methods, specifies runtime as "edge", and lists preferred regions.
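
Putting the reviewed pieces together, the route file plausibly follows the standard NextChat handler shape. A hedged reconstruction (import paths and error-handling details are assumed; the preferred-region list is omitted):

import { auth } from "../../auth";
import { requestOpenai } from "../../common";
import { ModelProvider } from "@/app/constant";
import { NextRequest, NextResponse } from "next/server";

// Hypothetical reconstruction: log the catch-all params, answer OPTIONS,
// authenticate, then delegate to the shared OpenAI request handler.
async function handle(
  req: NextRequest,
  { params }: { params: { path: string[] } },
) {
  console.log("[Azure Route] params ", params);
  if (req.method === "OPTIONS") {
    return NextResponse.json({ body: "OK" }, { status: 200 });
  }
  const authResult = auth(req, ModelProvider.GPT);
  if (authResult.error) {
    return NextResponse.json(authResult, { status: 401 });
  }
  try {
    return await requestOpenai(req);
  } catch (e) {
    console.error("[Azure] ", e);
    return NextResponse.json({ error: true }, { status: 500 });
  }
}

export const GET = handle;
export const POST = handle;
export const runtime = "edge";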

@@ -140,7 +141,33 @@ export class ChatGPTApi implements LLMApi {
      options.onController?.(controller);

      try {
-       const chatPath = this.path(OpenaiPath.ChatPath);
+       let chatPath = "";
+       if (modelConfig.providerName == ServiceProvider.Azure) {
A repository Member commented on this hunk:

So what does the latest Azure configuration look like here, then? The README needs to be updated.

@lloydzhou (Contributor, Author) replied:
#4804


@coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 183ad2a and 14bc1b6.

Files selected for processing (3)
  • app/client/api.ts (5 hunks)
  • app/client/platforms/openai.ts (5 hunks)
  • app/constant.ts (4 hunks)
Files skipped from review as they are similar to previous changes (3)
  • app/client/api.ts
  • app/client/platforms/openai.ts
  • app/constant.ts

@coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 14bc1b6 and 6dc4844.

Files selected for processing (2)
  • app/api/common.ts (2 hunks)
  • app/client/platforms/openai.ts (5 hunks)
Files skipped from review as they are similar to previous changes (2)
  • app/api/common.ts
  • app/client/platforms/openai.ts

@Dogtiti self-requested a review on Jul 5, 2024 16:11

@Dogtiti (Member) commented Jul 6, 2024:

#3344


@Dogtiti merged commit 2d1f522 into ChatGPTNextWeb:main on Jul 6, 2024 (0 of 2 checks passed)
@lloydzhou (Contributor, Author) commented Jul 8, 2024

> Hi @Dogtiti @lloydzhou I ran into the same problem as @itcodes. I mentioned it in #4934 (comment).
>
> Specifically, with this change, when the environment variable explicitly specifies gpt-4o, the two options shown in the screenshot appear, and by default it selects OpenAI. Only manually switching to Azure fixes the problem.
>
> I don't think this is intuitive, especially when AZURE_URL is explicitly set via the environment variable. Alternatively, could you provide documentation for providerName and displayName, so that gpt-4o can be explicitly configured in CUSTOM_MODELS as gpt-4o(Azure)=gpt-4o?

Two issues:

  1. With customModels configured, having both openai and azure show up at the same time is indeed unreasonable and needs to be fixed (though this is only a display problem; it does not affect the failed-request logic).

     The difficulty we hit when trying to fix this earlier: AZURE_URL is configured on the server-config side and is never sent to the frontend, so the frontend page cannot easily tell. The code also has to account for running app-only: inside the app there is no node server, so the configuration in the env file cannot be used. We will therefore look into how to detect this reliably in each environment and display it sensibly.

  2. We need to check under which configuration an Azure request is wrongly sent to /api/openai and produces the "deployment not found" error message. This is currently the highest-priority issue.


@CoreJa commented Jul 8, 2024

> Specifically, with this change, when the environment variable explicitly specifies gpt-4o, the two options shown in the screenshot appear, and by default it selects openai. Only manually switching to azure fixes the problem.

"and by default it selects openai": by default it selects openai, so in the default case (with openai selected) the azure request is sent to /api/openai by mistake, resulting in a 404.

"Only manually switching to azure fixes the problem": the problem was indeed resolved after manually switching to azure.

Please read the reply more carefully, and type a few less "!!!" to soften the tone. Also, the config information you asked for has been provided under the other PR, #4934. Please take another look, thx.

Appreciate your help and efforts!


@lloydzhou (Contributor, Author) replied:

> Please read the reply more carefully, and type a few less "!!!" to soften the tone.

My sincere apologies. (Those exclamation marks were aimed more at myself; I have updated that comment.)

> Also, the config information you asked for has been provided under the other PR, #4934. Please take another look.

I had indeed not noticed the update over there; I will go look at it now.

As for the same model being offered by different providers, which makes the model selector show multiple entries: I think a more complete solution still needs to be worked out here (besides Azure serving OpenAI models, some other providers run into the same issue, for example aws bedrock, or the cloudflare AI Gateway mentioned in several issues).


@CoreJa commented Jul 8, 2024

Understood.

I agree with you. The approach I'm considering works through CUSTOM_MODELS: concretely, you could add a new parser. The current syntax is model_name=display_name; consider parsing the environment variable with a format like model_name<provider_name>=display_name instead.

In addition, other parameters could be extended inside the <> as key-value pairs, e.g. model_name<provider_name,max_tokens=xxx,param1=value1,...>=display_name (see lobechat next door for reference); a parser sketch follows below.

I believe this approach is fairly low-intrusion for the current code and is worth considering.
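
A minimal sketch of such a parser (purely illustrative; this syntax is only a proposal in this thread and nothing like it was merged in this PR):

// Hypothetical parser for the proposed
// model_name<provider,key=value,...>=display_name syntax.
function parseCustomModel(entry: string) {
  const match = entry.match(/^(.+?)(?:<([^>]*)>)?(?:=(.+))?$/);
  if (!match) return null;
  const [, name, opts = "", displayName] = match;
  // The first bare token inside <> is the provider; the rest are key=value pairs.
  const [provider, ...pairs] = opts.split(",").filter(Boolean);
  const params = Object.fromEntries(
    pairs.map((pair) => pair.split("=", 2) as [string, string]),
  );
  return { name, provider, displayName, params };
}

// e.g. parseCustomModel("gpt-3.5-turbo<Azure,max_tokens=2048>=gpt35")
//  -> { name: "gpt-3.5-turbo", provider: "Azure",
//       displayName: "gpt35", params: { max_tokens: "2048" } }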


@lloydzhou (Contributor, Author) replied:

Actually, I discussed using model_name@provider=display_name with another developer along the way; display_name can then double as the deployment_name (both azure and Doubao (豆包) require a deployment_name). An example follows below.

As for the <key=value,...> syntax you mentioned: since key-value pairs are already being used inside the <>, is the trailing = still necessary at all? The display name could simply go inside the brackets too.
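
For example, under that proposal (values illustrative, not a merged feature):

CUSTOM_MODELS=-all,+gpt-4o@Azure=gpt-4o-deployment

Here gpt-4o is the model name, Azure the provider, and gpt-4o-deployment the display name, reused as the Azure deployment name.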


@CoreJa commented Jul 8, 2024

I don't follow the sentence "the trailing = is then unnecessary". Do you mean display_name is not needed?

I think display_name can stay; there's no need to kill it. I can't point to a reason it "absolutely must not be removed", but one unusual case: I have deployed several gpt-3.5-turbo models on the same azure account for different user groups, yet I want them all to show up in NextChat under the same name, gpt-3.5-turbo.

Or take a locally deployed ollama or another OpenAI-API-compatible provider: model_name should be an identity, but it may be too ugly or too long, e.g. Qwen2-57B-A14B-Instruct-q4_K_M, and I would like display_name to simply be qwen2-57B.

And consider compatibility for existing setups: many current users have already specified a display_name, in exactly the present model_name=display_name format; killing it would introduce unnecessary incompatibility and be quite intrusive.


@itcodes commented Jul 8, 2024

> (quoting @CoreJa's comment above on keeping display_name)

Indeed. For a while I generally kept deployment_name identical to model_name, because with other tools, if the two differ, the model sometimes isn't recognized; nextweb is arguably the earliest product to start solving this problem.


@lloydzhou (Contributor, Author) replied:

> I don't follow the sentence "the trailing = is then unnecessary". Do you mean display_name is not needed?

What I mean is: once the <> syntax is used, display_name can in fact go inside the brackets (model_name<provider,display_name=xxxname,max_tokens=2048,...>), so there is no need for a =display_name at the outer level.


@CoreJa commented Jul 8, 2024

Note the last point mentioned in #4930 (comment):

> And consider compatibility for existing setups: many current users have already specified a display_name, in exactly the present model_name=display_name format; killing it would introduce unnecessary incompatibility and be quite intrusive.

The vast majority of users expect, or assume by default, that an ordinary upgrade is seamless, or at least does not bring a compatibility disaster (python2 -> python3, for example). Of course, breaking compatibility while marking the old syntax deprecated several major versions in advance is also a valid approach; I just don't think this change should cause that level of intrusiveness.


@lloydzhou (Contributor, Author) replied:

Perhaps the model_name@provider=display_name form mentioned earlier is the less intrusive option in the short term.

The @ is introduced here by analogy with email addresses: what follows the @ denotes the Service Provider.

Also:

> But one unusual case: I have deployed several gpt-3.5-turbo models on the same azure account for different user groups, yet I want them all to show up in NextChat under the same name, gpt-3.5-turbo.

This could probably be handled with a further extension (though it feels like rather a lot of changes):

  1. Allow either a Service Provider or a Service Provider alias after the @.
  2. Add a new CUSTOM_PROVIDER variable that specifies, for a given Service Provider alias, which type of Provider to use and where the request should ultimately be sent.
     For example:
     CUSTOM_PROVIDER=Azure1@Azure=https://{apikey}%40{resource-name}.openai.azure.com/openai/deployments
     CUSTOM_MODELS=-all,+gpt-3.5-turbo@Azure1=gpt35
     In this mode, the frontend shows gpt35(Azure1). Once that model is selected, the frontend page processes messages and model parameters following the Provider=Azure conventions, and sends the request to the node server with gpt35 as the deployment_name and Azure1 as the provider_name; the backend node server, following the CUSTOM_PROVIDER configuration, forwards the request to the service at https://{apikey}@{resource-name}.openai.azure.com/openai/deployments.


@lloydzhou (Contributor, Author) replied:

> (quoting the CUSTOM_PROVIDER proposal above)

With the CUSTOM_PROVIDER pattern, a relay platform such as one-api should also be handled purely through configuration:

CUSTOM_PROVIDER=oneapi@OpenAI=https://{one_api_key}@{oneapi.xxx.com}
CUSTOM_MODELS=-all,+gpt-3.5-turbo@oneapi=gpt35

This amounts to a custom provider named oneapi whose logic is the same as OpenAI's, except that requests to the node server use the path /api/oneapi/v1/chat/completions; the node server then looks up one_api_key and oneapi.xxx.com and forwards the request.


lloydzhou added a commit to ConnectAI-E/ChatGPT-Next-Web that referenced this pull request Jul 8, 2024
@itcodes commented Jul 9, 2024

@lloydzhou When azure_url is explicitly set, the gpt-4o(openai) bug still appears. Have you found the cause? Roughly how long until a release?


Labels: planned (planned feature, will support in the future)
Projects: none yet
Participants: 5