Add common parameters for databases
sfc-gh-jcieslak committed Jun 10, 2024
1 parent b804c0c commit 4d13d80
Showing 48 changed files with 1,561 additions and 1,599 deletions.
4 changes: 2 additions & 2 deletions MIGRATION_GUIDE.md
@@ -15,12 +15,12 @@ From now on, please migrate and use the new database resources for their unique
The split was done (and will be done for several objects during the refactor) to make the resources simpler to maintain and use.
Its purpose was also to divide the resources by their specific purpose, rather than cramming every use case of an object into one resource.

### *(behavior change)* snowflake_databases datasource
- `terse` and `history` fields were removed.
- `replication_configuration` field was removed from `databases`.
- `pattern` was replaced by the `like` field (see the migration sketch below).
- Additional filtering options added (`limit`).
- Added missing fields returned by SHOW DATABASES.
- Added outputs from DESC DATABASE and SHOW PARAMETERS IN DATABASE (they can be turned off by declaring `with_describe = false` and `with_parameters = false`; **they're turned on by default**).
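For example, migrating off the removed `pattern` field is a straightforward rename to `like` (a minimal sketch; the database name is hypothetical):

```terraform
# Before
data "snowflake_databases" "example" {
  pattern = "my_database"
}

# After
data "snowflake_databases" "example" {
  like = "my_database"
}
```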

## v0.89.0 ➞ v0.90.0
63 changes: 54 additions & 9 deletions docs/data-sources/databases.md
@@ -12,16 +12,60 @@ description: |-
## Example Usage

```terraform
data "snowflake_databases" "test" {
with_describe = false
with_parameters = false
like = "database-name"
starts_with = "database-name"
# Simple usage
data "snowflake_databases" "simple" {
}
output "simple_output" {
value = data.snowflake_databases.simple.databases
}
# Filtering (like)
data "snowflake_databases" "like" {
like = "database-name"
}
output "like_output" {
value = data.snowflake_databases.like.databases
}
# Filtering (starts_with)
data "snowflake_databases" "starts_with" {
starts_with = "database-"
}
output "starts_with_output" {
value = data.snowflake_databases.starts_with.databases
}
# Filtering (limit)
data "snowflake_databases" "limit" {
limit {
rows = 20
from = "database-name"
rows = 10
from = "database-"
}
}
output "limit_output" {
value = data.snowflake_databases.limit.databases
}
# Without additional data (to limit the number of calls make for every found database)
data "snowflake_databases" "only_show" {
# with_describe is turned on by default and it calls DESCRIBE DATABASE for every database found and attaches it's output to databases.*.description field
with_describe = false
# with_parameters is turned on by default and it calls SHOW PARAMETERS FOR DATABASE for every database found and attaches it's output to databases.*.parameters field
with_parameters = false
}
output "only_show_output" {
value = data.snowflake_databases.only_show.databases
}
# Ensure the number of databases is equal to at least one element (with the use of postcondition)
data "snowflake_databases" "assert_with_postcondition" {
starts_with = "database-name"
lifecycle {
postcondition {
condition = length(self.databases) > 0
@@ -30,8 +74,9 @@ data "snowflake_databases" "test" {
}
}
# Ensure exactly one database is returned (with the use of a check block)
check "database_check" {
  data "snowflake_databases" "assert_with_check_block" {
    like = "database-name"
  }
```

@@ -48,7 +93,7 @@ check "database_check" {
### Optional

- `like` (String) Filters the output with a **case-insensitive** pattern, with support for SQL wildcard characters (`%` and `_`); see the usage sketch after this list.
- `limit` (Block List, Max: 1) Limits the number of rows returned. Optionally, the output can start from the first element matched by `from`; see the usage sketch after this list. (see [below for nested schema](#nestedblock--limit))
- `starts_with` (String) Filters the output with **case-sensitive** characters indicating the beginning of the object name.
- `with_describe` (Boolean) Runs DESC DATABASE for each database returned by SHOW DATABASES. The output of describe is saved to the description field. By default this value is set to true.
- `with_parameters` (Boolean) Runs SHOW PARAMETERS FOR DATABASE for each database returned by SHOW DATABASES. The output is saved to the parameters field as a map. By default this value is set to true.
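As a usage sketch of the filters above (the `prod_` prefix is hypothetical): `like` matches case-insensitively with SQL wildcards, while `limit` with `from` pages through the output:

```terraform
# % matches any sequence of characters, _ matches exactly one character
data "snowflake_databases" "prefixed" {
  like = "prod_%"
}

# Return at most 10 rows, starting from the first database name matching "prod_"
data "snowflake_databases" "paged" {
  limit {
    rows = 10
    from = "prod_"
  }
}
```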
65 changes: 29 additions & 36 deletions docs/resources/secondary_database.md
@@ -29,25 +29,26 @@ resource "snowflake_standard_database" "primary" {
resource "snowflake_secondary_database" "test" {
provider = secondary_account
name = snowflake_standard_database.primary.name # It's recommended to give a secondary database the same name as its primary database
as_replica_of = "<primary_account_organization_name>.<primary_account_name>.${snowflake_standard_database.primary.name}"
is_transient = false
data_retention_time_in_days {
value = 10
}
max_data_extension_time_in_days {
value = 20
}
external_volume = "external_volume_name"
catalog = "catalog_name"
replace_invalid_characters = false
default_ddl_collation = "en_US"
storage_serialization_policy = "OPTIMIZED"
log_level = "OFF"
trace_level = "OFF"
comment = "A secondary database"
as_replica_of = "<primary_account_organization_name>.<primary_account_name>.${snowflake_standard_database.primary.name}"
comment = "A secondary database"
data_retention_time_in_days = 10
max_data_extension_time_in_days = 20
external_volume = "<external_volume_name>"
catalog = "<external_volume_name>"
replace_invalid_characters = false
default_ddl_collation = "en_US"
storage_serialization_policy = "COMPATIBLE"
log_level = "INFO"
trace_level = "ALWAYS"
suspend_task_after_num_failures = 10
task_auto_retry_attempts = 10
user_task_managed_initial_warehouse_size = "LARGE"
user_task_timeout_ms = 3600000
user_task_minimum_trigger_interval_in_seconds = 120
quoted_identifiers_ignore_case = false
enable_console_output = false
}
```
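The example above assumes a `secondary_account` provider alias pointing at the account that hosts the secondary database; a minimal sketch of such a configuration (credentials omitted) could look like:

```terraform
provider "snowflake" {
  alias = "secondary_account"
  # account, user, and authentication settings for the replica account go here
}
```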

@@ -63,35 +64,27 @@ resource "snowflake_secondary_database" "test" {

- `catalog` (String) The database parameter that specifies the default catalog to use for Iceberg tables.
- `comment` (String) Specifies a comment for the database.
- `data_retention_time_in_days` (Number) Specifies the number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the database, as well as specifying the default Time Travel retention time for all schemas created in the database. For more details, see [Understanding & Using Time Travel](https://docs.snowflake.com/en/user-guide/data-time-travel).
- `default_ddl_collation` (String) Specifies a default collation specification for all schemas and tables added to the database. It can be overridden on schema or table level. For more information, see [collation specification](https://docs.snowflake.com/en/sql-reference/collation#label-collation-specification).
- `enable_console_output` (Boolean) If true, enables stdout/stderr fast path logging for anonymous stored procedures.
- `external_volume` (String) The database parameter that specifies the default external volume to use for Iceberg tables.
- `is_transient` (Boolean) Specifies the database as transient. Transient databases do not have a Fail-safe period so they do not incur additional storage costs once they leave Time Travel; however, this means they are also not protected by Fail-safe in the event of a data loss.
- `log_level` (String) Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF]. Messages at the specified level (and at more severe levels) are ingested. For more information, see [LOG_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-log-level).
- `max_data_extension_time_in_days` (Number) Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale. For a detailed description of this parameter, see [MAX_DATA_EXTENSION_TIME_IN_DAYS](https://docs.snowflake.com/en/sql-reference/parameters.html#label-max-data-extension-time-in-days).
- `quoted_identifiers_ignore_case` (Boolean) If true, the case of quoted identifiers is ignored.
- `replace_invalid_characters` (Boolean) Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.
- `storage_serialization_policy` (String) The storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED]. COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines. OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.
- `suspend_task_after_num_failures` (Number) How many times a task must fail in a row before it is automatically suspended. 0 disables auto-suspending.
- `task_auto_retry_attempts` (Number) Maximum automatic retries allowed for a user task.
- `trace_level` (String) Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF]. For information about levels, see [TRACE_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-trace-level).
- `user_task_managed_initial_warehouse_size` (String) The initial size of warehouse to use for managed warehouses in the absence of history.
- `user_task_minimum_trigger_interval_in_seconds` (Number) Minimum amount of time between Triggered Task executions in seconds.
- `user_task_timeout_ms` (Number) User task execution timeout in milliseconds.

### Read-Only

- `id` (String) The ID of this resource.

## Import

Import is supported using the following syntax:
45 changes: 31 additions & 14 deletions docs/resources/shared_database.md
@@ -33,19 +33,29 @@ resource "snowflake_grant_privileges_to_share" "test" {
# 2. Creating shared database
resource "snowflake_shared_database" "test" {
provider = secondary_account
depends_on = [snowflake_grant_privileges_to_share.test]
name = snowflake_standard_database.test.name # shared database should have the same as the "imported" one
from_share = "<primary_account_organization_name>.<primary_account_name>.${snowflake_share.test.name}"
is_transient = false
external_volume = "external_volume_name"
catalog = "catalog_name"
replace_invalid_characters = false
default_ddl_collation = "en_US"
storage_serialization_policy = "OPTIMIZED"
log_level = "OFF"
trace_level = "OFF"
comment = "A shared database"
provider = secondary_account
depends_on = [snowflake_grant_privileges_to_share.test]
name = snowflake_standard_database.test.name # shared database should have the same as the "imported" one
is_transient = false
from_share = "<primary_account_organization_name>.<primary_account_name>.${snowflake_share.test.name}"
comment = "A shared database"
data_retention_time_in_days = 10
max_data_extension_time_in_days = 20
external_volume = "<external_volume_name>"
catalog = "<external_volume_name>"
replace_invalid_characters = false
default_ddl_collation = "en_US"
storage_serialization_policy = "COMPATIBLE"
log_level = "INFO"
trace_level = "ALWAYS"
suspend_task_after_num_failures = 10
task_auto_retry_attempts = 10
user_task_managed_initial_warehouse_size = "LARGE"
user_task_timeout_ms = 3600000
user_task_minimum_trigger_interval_in_seconds = 120
quoted_identifiers_ignore_case = false
enable_console_output = false
}
```

@@ -62,11 +72,18 @@ resource "snowflake_shared_database" "test" {
- `catalog` (String) The database parameter that specifies the default catalog to use for Iceberg tables.
- `comment` (String) Specifies a comment for the database.
- `default_ddl_collation` (String) Specifies a default collation specification for all schemas and tables added to the database. It can be overridden on schema or table level. For more information, see [collation specification](https://docs.snowflake.com/en/sql-reference/collation#label-collation-specification).
- `enable_console_output` (Boolean) If true, enables stdout/stderr fast path logging for anonymous stored procedures.
- `external_volume` (String) The database parameter that specifies the default external volume to use for Iceberg tables.
- `log_level` (String) Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF]. Messages at the specified level (and at more severe levels) are ingested. For more information, see [LOG_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-log-level).
- `quoted_identifiers_ignore_case` (Boolean) If true, the case of quoted identifiers is ignored.
- `replace_invalid_characters` (Boolean) Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.
- `storage_serialization_policy` (String) The storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED]. COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines. OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.
- `suspend_task_after_num_failures` (Number) How many times a task must fail in a row before it is automatically suspended. 0 disables auto-suspending.
- `task_auto_retry_attempts` (Number) Maximum automatic retries allowed for a user task.
- `trace_level` (String) Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF]. For information about levels, see [TRACE_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-trace-level).
- `user_task_managed_initial_warehouse_size` (String) The initial size of warehouse to use for managed warehouses in the absence of history.
- `user_task_minimum_trigger_interval_in_seconds` (Number) Minimum amount of time between Triggered Task executions in seconds.
- `user_task_timeout_ms` (Number) User task execution timeout in milliseconds.

### Read-Only
