[Bug]: Cannot configure PUBLIC schema of new database #2826
Hey @wesleyhillyext. Thanks for reaching out to us. We plan to discuss this issue with the schema resource rework as part of https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#preparing-essential-ga-objects-for-the-provider-v1. A few comments from my side:
I just want to point out that there is also the possibility that we will decide not to handle the […]
Thanks for your reply @sfc-gh-asawicki, good to know this was on your radar. I've been thinking about some of your comments and just had a question on one:
From my understanding, using the […] Whereas managed-access schemas seem more about centralized vs. decentralized access management (centralized is the approach we use). So I didn't understand why our use of managed-access schemas might clash with the point of the […]
I think any of my proposed solutions […] The documentation says that […]
My only point here was that currently managing […]
Yeah, even the name scares potential users away, because this was the initial plan. However, we have decided to stick with this resource even after V1, maybe with a bit safer implementation. Docs were not adjusted because the resource is still dangerous if used incorrectly, but you can see this resource on the list linked here: https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#preparing-essential-ga-objects-for-the-provider-v1.
Hi @wesleyhillyext 👋 […]
Thanks for the update @sfc-gh-jmichalak @sfc-gh-asawicki. That will resolve the main operational issue with declaring the […] Normally, creating resources in Terraform is always a non-destructive operation. But […] Safety is one of the reasons I ended up proposing the solution I originally proposed.
I didn't really dive into this philosophy earlier on, but I may as well add my two cents now: SQL is not Terraform. SQL DDL is an imperative language, whereas Terraform is a declarative language; it seems entirely natural to me that there would be some "impedance mismatch" between the declarative and imperative paradigms, such that what makes sense as one declarative resource doesn't always align 1:1 with a single imperative SQL statement. Such alignment might be a fine consideration when all else is equal, but I would caution against it becoming an overriding consideration.
Hey @wesleyhillyext. Thanks for your thoughts. TBH, for me it's not a big difference if we add a boolean option to the […]
Fortunately: […]
However, your concerns are valid. Ultimately, `CREATE OR ALTER SCHEMA` (like the one that is already in preview for tables: https://docs.snowflake.com/en/sql-reference/sql/create-table#create-or-alter-table) would be a better solution.
- During the create operation, detect if the PUBLIC schema has already been created, and instead of using OR REPLACE, redirect the flow to `UpdateContextSchema` (note that the `name` change needs to be excluded to avoid an unnecessary rename)
- Add a check to acceptance tests to ensure the schema hasn't been dropped
- Describe the new behavior in the migration guide for v0.94.1 and add a note for v0.94.0

## Test Plan
* [x] acceptance tests

## References
References #2826, specifically comment #2826 (comment)
Hi @wesleyhillyext 👋 This has been fixed in the v0.94.1 release. Now, we use […]
Thanks for the update @sfc-gh-jmichalak. Unfortunately, that approach breaks in our case. Using […] I originally proposed the "drop public schema immediately after a create" solution after pretty careful consideration of all these kinds of alternatives, and I still think it's the only approach which doesn't introduce any of its own problems.
Hey @wesleyhillyext. We want to avoid dropping the PUBLIC schema programmatically (mainly because of the reasons discussed above), so we explored the non-dropping solutions. Ultimately, we will replace it with […] The role requirement is a Snowflake limitation, not the provider's. We try to align every resource with the Snowflake role requirements. You can report it to Snowflake support (but I guess you have already done so). I think it is still possible to handle your case in the provider: […]
Will it work for you?
If the database resource just successfully executed a […]
This is effectively equivalent to the […]
Sorry, I'm not quite sure what you were trying to get at here. It is already possible to achieve the desired resource state in Snowflake without using the […]
Personally, I think the goal of being able to apply Terraform with roles other than […]
TL;DR: We will add `drop_public_schema_on_creation`. My answers are below (not to leave the questions unanswered).
The result is the same, but it differs in an important detail: it's not hidden from the provider's user like the […]
This: […]
We do not have a principle of […]
And this is exactly what I proposed: […]
## Changes
- #2978 (adjust migration guide)
- Fix database tests (remove tests for state upgraders, because they were incorrect, and document the step-by-step process for manual testing; I went through all of it)
- Introduce a new parameter to address #2826 and test it
  - Added a drop with retries just to be sure, but I don't think it should ever fail, other than on some kind of strange networking error between database create and drop schema
  - I opted to drop the schema after setting the id, so if we manage to create a database but fail to drop the schema, the provider will print a warning that the public schema couldn't be dropped and should be dropped manually. The other approaches I considered:
    - Do it before setting the ID, but that would mean Terraform running create again and failing on CREATE DATABASE (because it's already there).
    - Do it like right now, but make sure the public schema is gone in the alter; I didn't go for that because it could lead to further complications (what is considered "after create"? after a successful create? how would we keep the state of the create status? etc.).

---------
Co-authored-by: Jakub Michalak <jakub.michalak@snowflake.com>
Thanks; that […]
Hey @wesleyhillyext. We released v0.95 yesterday, containing the `drop_public_schema_on_creation` attribute. Please check and let us know if this works for you.
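For reference, usage of the new attribute would look something like this (a sketch based only on the attribute name mentioned above; the resource label and database name are illustrative):

```hcl
resource "snowflake_database" "this" {
  name = "MY_DB"

  # Drops the PUBLIC schema that Snowflake creates automatically as part of
  # CREATE DATABASE, so a separate snowflake_schema resource can manage
  # (or deliberately omit) the PUBLIC schema.
  drop_public_schema_on_creation = true
}
```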
Thanks very much for implementing that! The docs and commit look good to me. We're currently on version 0.88, and we're going to have to schedule a project to get through the many changes made to the provider to bring us up to 0.95. In the meantime, I think you can close this issue. |
Terraform CLI Version
1.7.4
Terraform Provider Version
0.88.0
Terraform Configuration
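The original configuration was not captured in this report. Based on the resources described below, a minimal reproduction might look like this (resource labels and the database name are illustrative):

```hcl
resource "snowflake_database" "this" {
  name = "MY_DB"
}

# Attempts CREATE SCHEMA PUBLIC, which fails because CREATE DATABASE
# has already created a PUBLIC schema with default properties.
resource "snowflake_schema" "public" {
  database = snowflake_database.this.name
  name     = "PUBLIC"
}
```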
Category
category:resource
Object type(s)
No response
Expected Behavior
When applying these resources for the first time, the database is created by the `snowflake_database` resource. Then the `PUBLIC` schema for that database is created by the `snowflake_schema` resource.
Actual Behavior
The database is created by the `snowflake_database` resource. The `snowflake_schema` resource then fails to apply when it runs `CREATE SCHEMA PUBLIC`, which fails because the schema already exists, since in Snowflake a `CREATE DATABASE` automatically creates a `PUBLIC` schema with default properties.
Steps to Reproduce
`terraform apply` the example configuration when the resources don't already exist in the TF state or in Snowflake.
How much impact is this issue causing?
Medium
Logs
No response
Additional Information
This reopens #306 with some additional thinking.
This has been a thorn in our side when creating new databases, as we either want to have an explicit `snowflake_schema` block for the `PUBLIC` schema, or in some cases don't want a `PUBLIC` schema for a specific database.
Proposed solution: After considerable thinking on this issue, I've come to believe there's only one workable solution: add a boolean option to the `snowflake_database` resource which, if enabled, causes it to execute `DROP SCHEMA PUBLIC` if it executes a `CREATE DATABASE`.
Alternatives considered and dismissed:
- […] the `PUBLIC` schema when the `snowflake_schema` resource fails, then apply again.
- […] `PUBLIC` schema in the `snowflake_database` resource.
- […] `PUBLIC` schema. […] `import` blocks to automatically import the `PUBLIC` schema.
- […] `PUBLIC` schema. […] `import` would have to be executed after the `snowflake_database` resource is created, and it's not clear that Terraform allows specifying this kind of dependency. […] `terraform_schema` resource will attempt to run `ALTER SCHEMA PUBLIC ENABLE MANAGED ACCESS`, which fails unless you have the `MANAGE GRANTS` privilege. A solution should require nothing but the `CREATE DATABASE` privilege, as `MANAGE GRANTS` is the most powerful privilege (you can use it to grant yourself `ACCOUNTADMIN`), so we don't have that privilege when applying most Terraform modules.
- […] `CREATE DATABASE` statement to allow a `WITHOUT PUBLIC SCHEMA` option. […] `CREATE OR ALTER`. And a `WITHOUT PUBLIC SCHEMA` option would be a little cleaner than having to run `DROP SCHEMA PUBLIC`.

It could be argued that the proposed "drop-public" behavior of the `snowflake_database` resource is the logical default behavior. Or the only possible behavior, i.e. always require a separate `snowflake_schema` resource for `PUBLIC`. But that way it wouldn't be backwards-compatible, which is why I suggested making this behavior opt-in. That said, the provider is currently making many other breaking changes with the goal of cleaning it up before a v1 release, so perhaps you might like to make this behavior the default/only behavior after all.
Would you like to implement a fix?