
google_bigtable_instance: cluster destroyed when adding replicated cluster #5573

Closed

jehiah opened this issue Feb 3, 2020 · 2 comments

jehiah commented Feb 3, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

$ terraform -v
Terraform v0.12.6
+ provider.google v3.1.0
+ provider.google-beta v3.1.0

Affected Resource(s)

  • google_bigtable_instance

Terraform Configuration Files

Initial Terraform state

resource "google_bigtable_instance" "mybigtabledb" {
   name = "mybigtabledb"
   cluster {
     cluster_id = "mydb-us-central1-a"
     zone       = "us-central1-a"
     num_nodes  = 3
   }
   lifecycle {
     # num_nodes managed by bigtable_autoscale once cluster is provisioned
     ignore_changes = ["cluster[0].num_nodes"]
   }
}

When changing to the following, the existing cluster is destroyed and recreated instead of simply having an additional replica cluster provisioned:

resource "google_bigtable_instance" "mybigtabledb" {
   name = "mybigtabledb"
   cluster {
     cluster_id = "mydb-us-central1-a"
     zone       = "us-central1-a"
     num_nodes  = 3
   }
   cluster {
     cluster_id = "mydb-us-east4-a"
     zone       = "us-east4-a"
     num_nodes  = 3
   }
   lifecycle {
     # num_nodes managed by bigtable_autoscale once cluster is provisioned
     ignore_changes = ["cluster[0].num_nodes", "cluster[1].num_nodes"]
   }
}

Debug Output

https://gist.github.com/jehiah/f51b7d767dda78879e84a0167105110f

Expected Behavior

A second cluster should be added to the existing instance.

Actual Behavior

Existing cluster (with data) unexpectedly destroyed.

google_bigtable_instance.mybigtabledb: Destroying... [id=projects/$PROJECT/instances/mybigtabledb]
google_bigtable_instance.mybigtabledb: Destruction complete after 1s
google_bigtable_instance.mybigtabledb: Creating...
google_bigtable_instance.mybigtabledb: Still creating... [10s elapsed]
// snip
google_bigtable_instance.mybigtabledb: Still creating... [2m30s elapsed]
google_bigtable_instance.mybigtabledb: Creation complete after 2m35s [id=projects/$PROJECT/instances/mybigtabledb]
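
One way to guard against this kind of unexpected destroy while on an affected provider version is Terraform's prevent_destroy lifecycle flag. A minimal sketch, reusing the single-cluster config from above (the bigtable_autoscale comment is from the original config):

resource "google_bigtable_instance" "mybigtabledb" {
   name = "mybigtabledb"
   cluster {
     cluster_id = "mydb-us-central1-a"
     zone       = "us-central1-a"
     num_nodes  = 3
   }
   lifecycle {
     # Abort any plan/apply that would destroy this instance, e.g. when a
     # config change is incorrectly treated as forcing replacement.
     prevent_destroy = true
     # num_nodes managed by bigtable_autoscale once cluster is provisioned
     ignore_changes = ["cluster[0].num_nodes"]
   }
}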

Steps to Reproduce

  1. terraform apply creating a single cluster instance
  2. terraform apply adding a second cluster

Related Issues

@ghost ghost added the bug label Feb 3, 2020
@venkykuberan venkykuberan self-assigned this Feb 4, 2020
@venkykuberan
Contributor

@jehiah this issue has been addressed in provider version 3.5.0 and later. Adding a new cluster will now be an in-place update. I have attached a sample plan output below.

data.google_client_openid_userinfo.me: Refreshing state...
google_bigtable_instance.bigtable-ah: Refreshing state... [id=projects/venky-external-org/instances/bigtable-ah]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_bigtable_instance.bigtable-ah will be updated in-place
  ~ resource "google_bigtable_instance" "bigtable-ah" {
        display_name  = "bigtable-ah"
        id            = "projects/venky-external-org/instances/bigtable-ah"
        instance_type = "PRODUCTION"
        name          = "bigtable-ah"
        project       = "venky-external-org"

        cluster {
            cluster_id   = "bigtable-main-cluster-2"
            num_nodes    = 3
            storage_type = "HDD"
            zone         = "us-central1-a"
        }
      + cluster {
          + cluster_id   = "bigtable-cluster-replica-2"
          + num_nodes    = 3
          + storage_type = "HDD"
          + zone         = "us-east4-a"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------

Closing this issue; please reopen if you still see the same behavior with the recommended provider versions.
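
For reference, a minimal sketch of pinning both providers to a version that includes the fix (version constraint syntax as supported by the reporter's Terraform 0.12.x; the ">= 3.5.0" bound comes from the comment above):

provider "google" {
  # 3.5.0 and later add the second cluster in-place instead of recreating the instance
  version = ">= 3.5.0"
}

provider "google-beta" {
  version = ">= 3.5.0"
}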

@ghost

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 28, 2020