
Conversation

@pwerry commented Aug 7, 2025

Q             | A
Bug fix?      | no
New feature?  | yes
BC breaks?    | no
Related Issue | N/A

Describe your change

This PR adds support for the new verbosity parameter in ChatCompletionRequest, which was introduced with GPT-5 and allows controlling response length and detail.

Implementation details:

  • Created Verbosity value class with predefined Low, Medium, High options (see the sketch after this list)
  • Added verbosity parameter to ChatCompletionRequest and ChatCompletionRequestBuilder
  • Follows the same pattern as existing reasoningEffort parameter
  • Added comprehensive tests and updated CHANGELOG.md
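
A minimal sketch of what the new pieces might look like, modeled on the existing reasoningEffort pattern; the declarations below are assumptions for illustration, not the merged code:

import kotlinx.serialization.Serializable

// Hypothetical sketch: a serializable value class wrapping the raw API string,
// with predefined constants for the documented options.
@Serializable
@JvmInline
value class Verbosity(val value: String) {
    companion object {
        val Low: Verbosity = Verbosity("low")
        val Medium: Verbosity = Verbosity("medium")
        val High: Verbosity = Verbosity("high")
    }
}

// On ChatCompletionRequest the field would presumably be optional and default
// to null, so existing requests are unaffected:
// val verbosity: Verbosity? = null

A value class keeps the wire format a plain string while giving callers type-safe constants, matching the shape of the existing reasoningEffort parameter.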

Usage:

val request = chatCompletionRequest {
    model = ModelId("gpt-5")
    verbosity = Verbosity.Low  // or Medium, High
    messages { /* ... */ }
}

What problem is this fixing?

The OpenAI API recently introduced a verbosity parameter for Chat Completions and Responses API endpoints. This parameter allows developers to control how concise or verbose the model's responses are:

  • low: Terse, concise responses
  • medium: Balanced responses (default)
  • high: Detailed, comprehensive responses

Without library support for this parameter, developers cannot take advantage of this new API feature to optimize response length for their specific use cases.
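
To illustrate, a hedged sketch building on the Usage example above; the model id and the comments are illustrative assumptions:

val terseRequest = chatCompletionRequest {
    model = ModelId("gpt-5")
    verbosity = Verbosity.Low   // short answers, e.g. quick replies in a chat UI
    messages { /* ... */ }
}

val detailedRequest = chatCompletionRequest {
    model = ModelId("gpt-5")
    verbosity = Verbosity.High  // comprehensive answers, e.g. documentation drafts
    messages { /* ... */ }
}

Omitting verbosity leaves the API default of medium in effect, so existing code is unaffected.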
