forked from langgenius/dify
Commit
chore: massive update of the Gemini models based on latest documentation (langgenius#8822)
Showing 9 changed files with 338 additions and 2 deletions.
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-flash-001.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-flash-001
label:
  en_US: Gemini 1.5 Flash 001
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 1048576
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
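The pricing block above quotes prices per `unit` of tokens: with `unit: '0.000001'`, the `input` and `output` figures are effectively per-million-token prices scaled down to a per-token rate. A minimal sketch of how such a block could be evaluated (the `token_cost` helper and the non-zero example price are hypothetical, not part of the Dify codebase; these configs ship with `'0.00'`):

```python
from decimal import Decimal

# Pricing block as it appears in gemini-1.5-flash-001.yaml.
pricing = {"input": "0.00", "output": "0.00", "unit": "0.000001", "currency": "USD"}

def token_cost(tokens: int, price: str, unit: str) -> Decimal:
    """Cost = tokens * price * unit, in the configured currency.

    Decimal avoids binary floating-point drift on money values.
    """
    return Decimal(tokens) * Decimal(price) * Decimal(unit)

# With the shipped zero prices, any usage costs nothing:
print(token_cost(1_000_000, pricing["input"], pricing["unit"]))

# With a hypothetical price of 0.075 USD per million input tokens:
print(token_cost(1_000_000, "0.075", pricing["unit"]))
```

Because `unit` is stored in the YAML as a string, parsing it with `Decimal` rather than `float` keeps the arithmetic exact.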
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-flash-002.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-flash-002
label:
  en_US: Gemini 1.5 Flash 002
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 1048576
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
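The `top_k` rule's help text ("Only sample from the top K options for each subsequent token") describes standard top-k sampling: keep only the k highest-scoring candidate tokens, renormalize, and sample among them. A minimal illustrative sketch (the `top_k_sample` function and the toy logits are assumptions for illustration; the actual sampling happens server-side in the Gemini API):

```python
import math
import random

def top_k_sample(logits: dict, k: int, rng: random.Random) -> str:
    """Sample one token from the k highest-logit candidates only."""
    # Keep the k entries with the largest logits.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax weights over the surviving candidates; random.choices
    # normalizes the weights, so dividing by the sum is unnecessary.
    weights = [math.exp(v) for _, v in top]
    return rng.choices([t for t, _ in top], weights=weights, k=1)[0]

rng = random.Random(0)
# With k=1 this degenerates to greedy decoding: always the argmax token.
print(top_k_sample({"cat": 2.0, "dog": 1.0, "fish": 0.0}, 1, rng))
```

Setting `required: false`, as the rule does, leaves the provider's default top-k in effect when the user does not override it.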
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-flash-8b-exp-0924.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-flash-8b-exp-0924
label:
  en_US: Gemini 1.5 Flash 8B 0924
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 1048576
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
2 changes: 1 addition & 1 deletion — api/core/model_runtime/model_providers/google/llm/gemini-1.5-flash-latest.yaml
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-flash.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-flash
label:
  en_US: Gemini 1.5 Flash
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 1048576
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-pro-001.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-pro-001
label:
  en_US: Gemini 1.5 Pro 001
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 2097152
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
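Each `parameter_rules` entry combines a default with min/max bounds: `max_tokens_to_sample` is required, defaults to 8192, and is clamped to the range [1, 8192]. A sketch of how a runtime could resolve a user-supplied value against such a rule (the `resolve_param` helper is a hypothetical illustration, not Dify's actual resolution code):

```python
def resolve_param(rule: dict, value=None):
    """Resolve a value against a parameter_rules entry.

    Falls back to the rule's default when no value is given, then
    clamps the result into the declared [min, max] range.
    """
    if value is None:
        if rule.get("required") and "default" not in rule:
            raise ValueError(f"parameter {rule['name']!r} is required")
        value = rule.get("default")
    if value is not None:
        if "min" in rule:
            value = max(rule["min"], value)
        if "max" in rule:
            value = min(rule["max"], value)
    return value

# The max_tokens_to_sample rule from the YAML above.
rule = {"name": "max_tokens_to_sample", "required": True,
        "default": 8192, "min": 1, "max": 8192}
print(resolve_param(rule))         # no value given: the default applies
print(resolve_param(rule, 99_999)) # clamped down to the declared max
```

Clamping rather than rejecting out-of-range values is one possible design choice; a stricter runtime might raise instead.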
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-pro-002.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-pro-002
label:
  en_US: Gemini 1.5 Pro 002
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 2097152
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
2 changes: 1 addition & 1 deletion — api/core/model_runtime/model_providers/google/llm/gemini-1.5-pro-latest.yaml
48 changes: 48 additions & 0 deletions — api/core/model_runtime/model_providers/google/llm/gemini-1.5-pro.yaml

@@ -0,0 +1,48 @@
model: gemini-1.5-pro
label:
  en_US: Gemini 1.5 Pro
model_type: llm
features:
  - agent-thought
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 2097152
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
  - name: stream
    label:
      zh_Hans: 流式输出
      en_US: Stream
    type: boolean
    help:
      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
    default: false
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
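The `stream` parameter that every file declares toggles between the two delivery modes its help text describes: incremental chunks as the model generates, or one complete response at the end. A toy sketch of the distinction (the `generate` function and its token list are purely illustrative, not the Gemini or Dify API):

```python
def generate(chunks, stream=False):
    """Return an iterator of chunks when streaming, else the joined text.

    stream=True mimics incremental delivery: the caller consumes chunks
    as they arrive. stream=False mimics the blocking mode: everything is
    accumulated and handed back at once.
    """
    if stream:
        return iter(chunks)
    return "".join(chunks)

# Blocking mode: one string, available only when generation finishes.
print(generate(["Hel", "lo, ", "world"]))

# Streaming mode: the caller can act on each chunk immediately.
for chunk in generate(["Hel", "lo, ", "world"], stream=True):
    print(chunk, end="")
```

Note that these configs default `stream` to `false`, so callers opt in to incremental delivery explicitly.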