feat: Stream on table resource #3109

Merged · 12 commits · Oct 9, 2024
Changes from 2 commits
9 changes: 9 additions & 0 deletions MIGRATION_GUIDE.md
@@ -4,6 +4,15 @@ This document is meant to help you migrate your Terraform config to newer versions. It will
describe deprecations or breaking changes and help you to change your configuration to keep the same (or similar) behavior
across different versions.

## v0.96.0 ➞ v0.97.0

### *(new feature)* snowflake_stream_on_table resource

To enhance clarity and functionality, the new resource `snowflake_stream_on_table` has been introduced to replace the previous `snowflake_stream`. Recognizing that the old resource carried multiple responsibilities within a single entity, we opted to divide it into more specialized resources.
The newly introduced resources are aligned with the latest Snowflake documentation at the time of implementation, and adhere to our [new conventions](#general-changes).
This split is based on the object on which the stream is created. The mapping between SQL statements and resources is the following (see the migration sketch after the list):
- `ON TABLE <table_name>` -> `snowflake_stream_on_table`
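
As an illustration only, here is a minimal migration sketch; the `snowflake_stream` attribute names (`on_table`, boolean `append_only`) and the identifier format reflect the pre-v0.97.0 resource as we understand it, so verify them against the version you are migrating from:

```terraform
# Before: the deprecated multi-purpose resource (attribute names assumed
# from pre-v0.97.0).
resource "snowflake_stream" "example" {
  name     = "stream"
  database = "database"
  schema   = "schema"

  on_table    = "\"database\".\"schema\".\"table\""
  append_only = true
}

# After: ON TABLE streams map to snowflake_stream_on_table. Note that
# append_only is now a string-valued tristate ("true" / "false" / unset).
resource "snowflake_stream_on_table" "example" {
  name     = "stream"
  database = "database"
  schema   = "schema"

  table       = "\"database\".\"schema\".\"table\""
  append_only = "true"
}
```

Depending on how you migrate, you may also need to remove the old resource from state and import the new one (`terraform state rm` followed by `terraform import`).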

## v0.95.0 ➞ v0.96.0

### snowflake_masking_policies data source changes
2 changes: 1 addition & 1 deletion docs/resources/masking_policy.md
@@ -100,7 +100,7 @@ EOF
- `describe_output` (List of Object) Outputs the result of `DESCRIBE MASKING POLICY` for the given masking policy. (see [below for nested schema](#nestedatt--describe_output))
- `fully_qualified_name` (String) Fully qualified name of the resource. For more information, see [object name resolution](https://docs.snowflake.com/en/sql-reference/name-resolution).
- `id` (String) The ID of this resource.
- - `show_output` (List of Object) Outputs the result of `SHOW MASKING POLICY` for the given masking policy. (see [below for nested schema](#nestedatt--show_output))
+ - `show_output` (List of Object) Outputs the result of `SHOW MASKING POLICIES` for the given masking policy. (see [below for nested schema](#nestedatt--show_output))

<a id="nestedblock--argument"></a>
### Nested Schema for `argument`
2 changes: 1 addition & 1 deletion docs/resources/row_access_policy.md
@@ -57,7 +57,7 @@ resource "snowflake_row_access_policy" "example_row_access_policy" {
- `describe_output` (List of Object) Outputs the result of `DESCRIBE ROW ACCESS POLICY` for the given row access policy. (see [below for nested schema](#nestedatt--describe_output))
- `fully_qualified_name` (String) Fully qualified name of the resource. For more information, see [object name resolution](https://docs.snowflake.com/en/sql-reference/name-resolution).
- `id` (String) The ID of this resource.
- - `show_output` (List of Object) Outputs the result of `SHOW ROW ACCESS POLICY` for the given row access policy. (see [below for nested schema](#nestedatt--show_output))
+ - `show_output` (List of Object) Outputs the result of `SHOW ROW ACCESS POLICIES` for the given row access policy. (see [below for nested schema](#nestedatt--show_output))

<a id="nestedblock--argument"></a>
### Nested Schema for `argument`
147 changes: 147 additions & 0 deletions docs/resources/stream_on_table.md
@@ -0,0 +1,147 @@
---
page_title: "snowflake_stream_on_table Resource - terraform-provider-snowflake"
subcategory: ""
description: |-
Resource used to manage streams on tables. For more information, check stream documentation https://docs.snowflake.com/en/sql-reference/sql/create-stream.
---

!> **V1 release candidate** This resource was reworked and is a release candidate for the V1. We do not expect significant changes to it before the V1. We welcome any feedback and will adjust the resource if needed. Any errors reported will be resolved with a higher priority. We encourage checking this resource out before the V1 release. Please follow the [migration guide](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/MIGRATION_GUIDE.md#v0960--v0970) to use it.

# snowflake_stream_on_table (Resource)

Resource used to manage streams on tables. For more information, check [stream documentation](https://docs.snowflake.com/en/sql-reference/sql/create-stream).

## Example Usage

```terraform
resource "snowflake_table" "table" {
database = "database"
schema = "schema"
name = "name"

column {
type = "NUMBER(38,0)"
name = "id"
}
}


# resource with more fields set
resource "snowflake_stream_on_table" "stream" {
name = "stream"
schema = "schema"
database = "database"

copy_grants = true
table = snowflake_table.table.fully_qualified_name
append_only = "true"
show_initial_rows = "true"

at {
statement = "8e5d0ca9-005e-44e6-b858-a8f5b37c5726"
}

comment = "A stream."
}
```
-> **Note** Instead of using `fully_qualified_name`, you can reference objects managed outside Terraform by constructing a correct ID; consult the [identifiers guide](https://registry.terraform.io/providers/Snowflake-Labs/snowflake/latest/docs/guides/identifiers#new-computed-fully-qualified-name-field-in-resources).
<!-- TODO(SNOW-1634854): include an example showing both methods-->
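
For example, a hypothetical configuration (object names are illustrative) where the monitored table is not managed by Terraform, using the quoted-identifier format from the guide linked above:

```terraform
resource "snowflake_stream_on_table" "on_unmanaged_table" {
  name     = "stream"
  database = "database"
  schema   = "schema"

  # Hand-constructed fully qualified identifier of a pre-existing table.
  table = "\"other_database\".\"other_schema\".\"existing_table\""
}
```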

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `database` (String) The database in which to create the stream. Due to technical limitations (read more [here](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/docs/technical-documentation/identifiers_rework_design_decisions.md#known-limitations-and-identifier-recommendations)), avoid using the following characters: `|`, `.`, `(`, `)`, `"`
- `name` (String) Specifies the identifier for the stream; must be unique for the database and schema in which the stream is created. Due to technical limitations (read more [here](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/docs/technical-documentation/identifiers_rework_design_decisions.md#known-limitations-and-identifier-recommendations)), avoid using the following characters: `|`, `.`, `(`, `)`, `"`
- `schema` (String) The schema in which to create the stream. Due to technical limitations (read more [here](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/docs/technical-documentation/identifiers_rework_design_decisions.md#known-limitations-and-identifier-recommendations)), avoid using the following characters: `|`, `.`, `(`, `)`, `"`
- `table` (String) Specifies an identifier for the table the stream will monitor. Due to technical limitations (read more [here](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/docs/technical-documentation/identifiers_rework_design_decisions.md#known-limitations-and-identifier-recommendations)), avoid using the following characters: `|`, `.`, `(`, `)`, `"`

### Optional

- `append_only` (String) Specifies whether this is an append-only stream. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means that the Snowflake default for this value is used.
- `at` (Block List, Max: 1) This field specifies that the request is inclusive of any changes made by a statement or transaction with a timestamp equal to the specified parameter. Due to Snowflake limitations, the provider does not detect external changes on this field. (see [below for nested schema](#nestedblock--at))
- `before` (Block List, Max: 1) This field specifies that the request refers to a point immediately preceding the specified parameter. This point in time is just before the statement, identified by its query ID, is completed. Due to Snowflake limitations, the provider does not detect external changes on this field. (see [below for nested schema](#nestedblock--before))
- `comment` (String) Specifies a comment for the stream.
- `copy_grants` (Boolean) Retains the access permissions from the original stream when a new stream is created using the OR REPLACE clause.
- `show_initial_rows` (String) Specifies whether to return all existing rows in the source table as row inserts the first time the stream is consumed. Available options are: "true" or "false". When the value is not set in the configuration, the provider will put "default" there, which means that the Snowflake default for this value is used.

### Read-Only

- `describe_output` (List of Object) Outputs the result of `DESCRIBE STREAM` for the given stream. (see [below for nested schema](#nestedatt--describe_output))
- `fully_qualified_name` (String) Fully qualified name of the resource. For more information, see [object name resolution](https://docs.snowflake.com/en/sql-reference/name-resolution).
- `id` (String) The ID of this resource.
- `show_output` (List of Object) Outputs the result of `SHOW STREAMS` for the given stream. (see [below for nested schema](#nestedatt--show_output))

<a id="nestedblock--at"></a>
### Nested Schema for `at`

Optional:

- `offset` (String) Specifies the difference in seconds from the current time to use for Time Travel, in the form -N where N can be an integer or arithmetic expression (e.g. -120 is 120 seconds, -30*60 is 1800 seconds or 30 minutes).
- `statement` (String) Specifies the query ID of a statement to use as the reference point for Time Travel. This parameter supports any statement of one of the following types: DML (e.g. INSERT, UPDATE, DELETE), TCL (BEGIN, COMMIT transaction), SELECT.
- `stream` (String) Specifies the identifier (i.e. name) for an existing stream on the queried table or view. The current offset in the stream is used as the AT point in time for returning change data for the source object.
- `timestamp` (String) Specifies an exact date and time to use for Time Travel. The value must be explicitly cast to a TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, or TIMESTAMP_TZ data type.
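
For instance, a sketch of a stream that starts from a Time Travel offset instead of a statement; the `-1800` value follows the `-N` seconds form described above, and only one of the four fields is set, matching the single-choice `AT` clause of Snowflake's `CREATE STREAM`:

```terraform
resource "snowflake_stream_on_table" "from_offset" {
  name     = "stream_from_offset"
  database = "database"
  schema   = "schema"
  table    = snowflake_table.table.fully_qualified_name

  at {
    offset = "-1800" # 30 minutes back in Time Travel
  }
}
```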


<a id="nestedblock--before"></a>
### Nested Schema for `before`

Optional:

- `offset` (String) Specifies the difference in seconds from the current time to use for Time Travel, in the form -N where N can be an integer or arithmetic expression (e.g. -120 is 120 seconds, -30*60 is 1800 seconds or 30 minutes).
- `statement` (String) Specifies the query ID of a statement to use as the reference point for Time Travel. This parameter supports any statement of one of the following types: DML (e.g. INSERT, UPDATE, DELETE), TCL (BEGIN, COMMIT transaction), SELECT.
- `stream` (String) Specifies the identifier (i.e. name) for an existing stream on the queried table or view. The current offset in the stream is used as the AT point in time for returning change data for the source object.
- `timestamp` (String) Specifies an exact date and time to use for Time Travel. The value must be explicitly cast to a TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, or TIMESTAMP_TZ data type.


<a id="nestedatt--describe_output"></a>
### Nested Schema for `describe_output`

Read-Only:

- `base_tables` (List of String)
- `comment` (String)
- `created_on` (String)
- `database_name` (String)
- `invalid_reason` (String)
- `mode` (String)
- `name` (String)
- `owner` (String)
- `owner_role_type` (String)
- `schema_name` (String)
- `source_type` (String)
- `stale` (String)
- `stale_after` (String)
- `table_name` (String)
- `type` (String)


<a id="nestedatt--show_output"></a>
### Nested Schema for `show_output`

Read-Only:

- `base_tables` (List of String)
- `comment` (String)
- `created_on` (String)
- `database_name` (String)
- `invalid_reason` (String)
- `mode` (String)
- `name` (String)
- `owner` (String)
- `owner_role_type` (String)
- `schema_name` (String)
- `source_type` (String)
- `stale` (String)
- `stale_after` (String)
- `table_name` (String)
- `type` (String)

## Import

Import is supported using the following syntax:

```shell
terraform import snowflake_stream_on_table.example '"<database_name>"."<schema_name>"."<stream_name>"'
```
2 changes: 1 addition & 1 deletion docs/resources/view.md
@@ -176,7 +176,7 @@ Required:

Optional:

- - `minutes` (Number) Specifies an interval (in minutes) of wait time inserted between runs of the data metric function. Conflicts with `using_cron`. Valid values are: `5` | `15` | `30` | `60` | `720` | `1440`. Due to Snowflake limitations, changes in this field is not managed by the provider. Please consider using [taint](https://developer.hashicorp.com/terraform/cli/commands/taint) command, `using_cron` field, or [replace_triggered_by](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#replace_triggered_by) metadata argument.
+ - `minutes` (Number) Specifies an interval (in minutes) of wait time inserted between runs of the data metric function. Conflicts with `using_cron`. Valid values are: `5` | `15` | `30` | `60` | `720` | `1440`. Due to Snowflake limitations, changes in this field are not managed by the provider. Please consider using [taint](https://developer.hashicorp.com/terraform/cli/commands/taint) command, `using_cron` field, or [replace_triggered_by](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#replace_triggered_by) metadata argument.
- `using_cron` (String) Specifies a cron expression and time zone for periodically running the data metric function. Supports a subset of standard cron utility syntax. Conflicts with `minutes`.


1 change: 1 addition & 0 deletions examples/resources/snowflake_stream_on_table/import.sh
@@ -0,0 +1 @@
terraform import snowflake_stream_on_table.example '"<database_name>"."<schema_name>"."<stream_name>"'
29 changes: 29 additions & 0 deletions examples/resources/snowflake_stream_on_table/resource.tf
@@ -0,0 +1,29 @@
resource "snowflake_table" "table" {
database = "database"
schema = "schema"
name = "name"

column {
type = "NUMBER(38,0)"
name = "id"
}
}


# resource with more fields set
resource "snowflake_stream_on_table" "stream" {
name = "stream"
schema = "schema"
database = "database"

copy_grants = true
table = snowflake_table.table.fully_qualified_name
append_only = "true"
show_initial_rows = "true"

at {
statement = "8e5d0ca9-005e-44e6-b858-a8f5b37c5726"
}

comment = "A stream."
}
3 changes: 2 additions & 1 deletion pkg/acceptance/bettertestspoc/README.md
@@ -344,8 +344,9 @@ func (w *WarehouseDatasourceShowOutputAssert) IsEmpty() {
- Omit computed fields in the model (like FullyQualifiedName), because it doesn't make sense to set them
- There's an error when generating models, steps to reproduce:
- Go to view resource code and change `data_metric_function` field to `testing` and make it required
- During the generation, the following error appears: mixed named and unnamed parameters.
It's a Go error indicating that a function signature mixes named and unnamed parameters (e.g. `func(abc string, int)`).
The error is the result of two things combined:
1. Lists of objects are partially generated, and only parameter name is generated in some functions (the type has to be added manually).
2. `testing` is a package name, which makes Go think we want an unnamed parameter there; we just didn't generate the type for that field in the function argument.
- Generate assertions checking that time fields are not empty - we often do not compare time fields by value, but only check that they are set
@@ -38,3 +38,50 @@ func (s *StreamAssert) HasStageName(expected string) *StreamAssert {
})
return s
}

func (s *StreamAssert) HasSourceType(expected sdk.StreamSourceType) *StreamAssert {
s.AddAssertion(func(t *testing.T, o *sdk.Stream) error {
t.Helper()
if o.SourceType == nil {
return fmt.Errorf("expected source type to have value; got: nil")
}
if *o.SourceType != expected {
return fmt.Errorf("expected source type: %v; got: %v", expected, *o.SourceType)
}
return nil
})
return s
}

func (s *StreamAssert) HasBaseTables(expected []sdk.SchemaObjectIdentifier) *StreamAssert {
s.AddAssertion(func(t *testing.T, o *sdk.Stream) error {
t.Helper()
if o.BaseTables == nil {
return fmt.Errorf("expected base tables to have value; got: nil")
}
if len(o.BaseTables) != len(expected) {
return fmt.Errorf("expected base tables length: %v; got: %v", len(expected), len(o.BaseTables))
}
for i := range o.BaseTables {
if o.BaseTables[i].FullyQualifiedName() != expected[i].FullyQualifiedName() {
return fmt.Errorf("expected base table id: %v; got: %v", expected[i], o.BaseTables[i])
}
}
return nil
})
return s
}

func (s *StreamAssert) HasMode(expected sdk.StreamMode) *StreamAssert {
s.AddAssertion(func(t *testing.T, o *sdk.Stream) error {
t.Helper()
if o.Mode == nil {
return fmt.Errorf("expected mode to have value; got: nil")
}
if *o.Mode != expected {
return fmt.Errorf("expected mode: %v; got: %v", expected, *o.Mode)
}
return nil
})
return s
}


@@ -49,4 +49,8 @@ var allResourceSchemaDefs = []ResourceSchemaDef{
name: "MaskingPolicy",
schema: resources.MaskingPolicy().Schema,
},
{
name: "StreamOnTable",
schema: resources.StreamOnTable().Schema,
},
}