Community Note

- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
- If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
@jehiah this issue has been fixed in provider version 3.5.0 and later. Adding a new cluster will now be applied in-place. I have attached the sample plan output below.
data.google_client_openid_userinfo.me: Refreshing state...
google_bigtable_instance.bigtable-ah: Refreshing state... [id=projects/venky-external-org/instances/bigtable-ah]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_bigtable_instance.bigtable-ah will be updated in-place
  ~ resource "google_bigtable_instance" "bigtable-ah" {
        display_name  = "bigtable-ah"
        id            = "projects/venky-external-org/instances/bigtable-ah"
        instance_type = "PRODUCTION"
        name          = "bigtable-ah"
        project       = "venky-external-org"

        cluster {
            cluster_id   = "bigtable-main-cluster-2"
            num_nodes    = 3
            storage_type = "HDD"
            zone         = "us-central1-a"
        }
      + cluster {
          + cluster_id   = "bigtable-cluster-replica-2"
          + num_nodes    = 3
          + storage_type = "HDD"
          + zone         = "us-east4-a"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------
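Since the fix landed in provider version 3.5.0, making sure the configuration pins a new enough provider is the practical takeaway. A minimal sketch of such a constraint (using the Terraform 0.12-era `version` argument; the project name is taken from the plan output above):

```hcl
# Sketch: constrain the Google provider to 3.5.0 or later, where adding
# a cluster to an existing google_bigtable_instance plans as an
# in-place update rather than a destroy/recreate.
provider "google" {
  version = ">= 3.5.0"
  project = "venky-external-org"
}
```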
Closing this issue; please reopen if you still see the same behavior with the recommended provider versions.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
ghost
locked and limited conversation to collaborators
Mar 28, 2020
Terraform Version
Affected Resource(s)
Terraform Configuration Files
Initial Terraform state
When changing to the following, the existing cluster is destroyed instead of just having an additional replica cluster provisioned:
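The configuration block itself was not captured in this page; a hypothetical reconstruction of the two-cluster configuration, with the resource, cluster, and zone names taken from the plan output posted in the comments, might look like:

```hcl
# Hypothetical reconstruction from the plan output elsewhere in this
# thread; the original configuration files were not captured here.
resource "google_bigtable_instance" "bigtable-ah" {
  name          = "bigtable-ah"
  project       = "venky-external-org"
  instance_type = "PRODUCTION"

  cluster {
    cluster_id   = "bigtable-main-cluster-2"
    zone         = "us-central1-a"
    num_nodes    = 3
    storage_type = "HDD"
  }

  # Newly added replica cluster; adding this block triggered the
  # destroy/recreate on provider versions before 3.5.0.
  cluster {
    cluster_id   = "bigtable-cluster-replica-2"
    zone         = "us-east4-a"
    num_nodes    = 3
    storage_type = "HDD"
  }
}
```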
Debug Output
https://gist.github.com/jehiah/f51b7d767dda78879e84a0167105110f
Expected Behavior
A second cluster should be added to the existing instance.
Actual Behavior
Existing cluster (with data) unexpectedly destroyed.
Steps to Reproduce
1. terraform apply (creating a single-cluster instance)
2. terraform apply (adding a second cluster)
Related Issues