
[Bug] Calling the OpenAI o1-preview model API returns "'frequency_penalty' is not supported with this model." #5053

Closed
734908350 opened this issue Dec 17, 2024 · 5 comments · Fixed by #5183
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@734908350

📦 Environment

Docker

📌 Version

1.0

💻 Operating System

macOS

🌐 Browser

Chrome

🐛 Bug Description

(Screenshot attached: 1734423189062 — image did not render in this export)

📷 Recurrence Steps

No response

🚦 Expected Behavior

No response

📝 Additional Information

No response

@734908350 734908350 added the 🐛 Bug Something isn't working | 缺陷 label Dec 17, 2024
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Dec 17, 2024
@lobehubbot
Member

👀 @734908350

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Dec 17, 2024

Hey @734908350! 👋 I'm here to help you with bug fixes, answer questions, and even guide you on how to become a contributor. Let's squash this bug together!

To resolve the error "frequency_penalty is not supported with this model" when calling the OpenAI o1-preview model API, ensure that frequency_penalty is set to 0 in the payload for the o1-preview model. The code should include a function like pruneO1Payload that sets frequency_penalty to 0 for models in the o1Models set, which includes 'o1-preview'. Here is a relevant code snippet:

export const pruneO1Payload = (payload: ChatStreamPayload) => ({
  ...payload,
  frequency_penalty: 0, // Ensure this is set to 0
  messages: payload.messages.map((message: OpenAIChatMessage) => ({
    ...message,
    role: message.role === 'system' ? 'user' : message.role,
  })),
  presence_penalty: 0,
  temperature: 1,
  top_p: 1,
});

Make sure that this function is applied to your payload before making the API call. Additionally, ensure that the handlePayload function in LobeOpenAI is correctly using pruneO1Payload for the 'o1-preview' model:

if (o1Models.has(model)) {
  return pruneO1Payload(payload) as any;
}

This should prevent the error by ensuring that the frequency_penalty is set to 0 for the o1-preview model [1].
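Putting the two snippets above together, a minimal self-contained sketch of the pruning flow might look like the following. Note that the ChatStreamPayload shape, the exact contents of o1Models, and the example payload are assumptions for illustration, simplified from LobeChat's actual types:

```typescript
// Simplified payload shape (assumption; LobeChat's real type has more fields).
interface ChatStreamPayload {
  model: string;
  messages: { role: string; content: string }[];
  frequency_penalty?: number;
  presence_penalty?: number;
  temperature?: number;
  top_p?: number;
}

// Models that reject sampling-penalty parameters (assumed set contents).
const o1Models = new Set(['o1-preview', 'o1-mini']);

// Force the parameters o1 models reject to their only accepted values,
// and downgrade 'system' messages to 'user' (o1-preview rejects system role).
const pruneO1Payload = (payload: ChatStreamPayload): ChatStreamPayload => ({
  ...payload,
  frequency_penalty: 0,
  messages: payload.messages.map((message) => ({
    ...message,
    role: message.role === 'system' ? 'user' : message.role,
  })),
  presence_penalty: 0,
  temperature: 1,
  top_p: 1,
});

// Apply the pruning only for o1-family models, pass others through unchanged.
const handlePayload = (payload: ChatStreamPayload): ChatStreamPayload =>
  o1Models.has(payload.model) ? pruneO1Payload(payload) : payload;

// Example: a payload that would otherwise trigger the API error.
const pruned = handlePayload({
  model: 'o1-preview',
  frequency_penalty: 0.5,
  messages: [{ role: 'system', content: 'You are a helpful assistant.' }],
});
console.log(pruned.frequency_penalty); // 0
console.log(pruned.messages[0].role); // 'user'
```

Because the penalty overrides come after the spread of the original payload, any user-supplied frequency_penalty (0.5 here) is discarded before the request reaches the API.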

To continue talking to Dosu, mention @dosu.


@734908350
Author

Uploading 1734423359132.jpg… (broken image attachment; the upload never completed)

@lobehubbot
Member

Bot detected the issue body's language is not English; translated it automatically.


@github-project-automation github-project-automation bot moved this from Roadmap - Chat 1.x to Done in Lobe Chat Routine Dec 26, 2024
@lobehubbot
Member

@734908350

This issue has been closed. If you have any further questions, you can comment and reply.
