aws_msk_cluster provisioned_throughput block has multiple issues #26031
Comments
Hello, and thank you for reporting the issue(s)! I’m having a hard time understanding the intended outcome, and how that intended outcome is not being met. Would it be possible for you to provide some clarity on the use case? This will help us build an acceptance test that fails, and then we can use that for remediation. An example:
Ideally the following three configs would all behave the same way:

1. <no config>

2. provisioned_throughput {
     enabled = false
   }

3. provisioned_throughput {
     enabled = false
     volume_throughput = X
   }

All three would force disablement of provisioned throughput, and subsequent terraform plans using any of the three configurations (when throughput is disabled on the MSK cluster) would result in no changes. Right now, <no config> seems to simply not make any changes; from the historical perspective of people who enabled the setting on clusters outside of Terraform, having their config disabled can be rude, so if that one doesn't change I can live with it. However, the second should not constantly result in terraform plan showing needed changes. And ideally the third would know to filter the throughput out of the API request, so that people can simply enable/disable throughput via basic code.
My team is also facing this issue. We support enabling provisioned_throughput, but we set provisioned_throughput.enabled to false by default with no additional parameters specified, so our default configuration looks like example 2 in the comment above by @ryan-dyer-sp.
This default configuration seems to work fine with new clusters stood up after we added support for the provisioned_throughput block to our internal MSK module. But using this newer version of the module against an existing cluster results in never-ending drift: Terraform expects the configuration block above to be present on the target cluster, and it is missing after every apply. We can address this in our MSK module by reverting to a version from before we supported this parameter, but I would like to understand how the provider expects a disabled provisioned_throughput configuration block to be constructed so that it works with both new and existing clusters. It could be that the answer is it isn't supported at this time; if so, I would like to understand when support for this use case will be added.
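For reference, a minimal sketch of where that disabled block sits in the resource; the resource name and volume size here are illustrative, not taken from the reports above:

```hcl
resource "aws_msk_cluster" "example" {
  # ... cluster_name, kafka_version, number_of_broker_nodes, etc. omitted ...

  broker_node_group_info {
    # ... instance_type, client_subnets, security_groups omitted ...

    storage_info {
      ebs_storage_info {
        volume_size = 1000

        # "Example 2": block present, throughput disabled, no volume_throughput.
        # This is the shape that reportedly drifts forever against existing clusters.
        provisioned_throughput {
          enabled = false
        }
      }
    }
  }
}
```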
Starting with AWS provider 5.0, this bug is no longer silently ignored as it was in provider v4; instead, applying produces the error below.

  ~ update in-place
Terraform will perform the following actions:
# aws_msk_cluster.msk will be updated in-place
~ resource "aws_msk_cluster" "msk" {
id = "arn:aws:kafka:us-east-1:433464930109:cluster/spoton-msk/dafaafc8-297a-410c-8acc-7903dfd668a7-12"
tags = {}
# (10 unchanged attributes hidden)
~ broker_node_group_info {
# (4 unchanged attributes hidden)
~ storage_info {
~ ebs_storage_info {
# (1 unchanged attribute hidden)
- provisioned_throughput {
- enabled = false -> null
- volume_throughput = 0 -> null
}
}
}
# (1 unchanged block hidden)
}
# (4 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
aws_msk_cluster.msk: Modifying... [id=arn:aws:kafka:us-east-1:433464930109:cluster/spoton-msk/dafaafc8-297a-410c-8acc-7903dfd668a7-12]
╷
│ Error: updating MSK Cluster (arn:aws:kafka:us-east-1:433464930109:cluster/spoton-msk/dafaafc8-297a-410c-8acc-7903dfd668a7-12) broker storage: BadRequestException: The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again.
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "469ec42e-adb6-4733-871e-0a0c10b37939"
│ },
│ Message_: "The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again."
│ }
│
│ with aws_msk_cluster.msk,
│ on msk.tf line 100, in resource "aws_msk_cluster" "msk":
│ 100: resource "aws_msk_cluster" "msk" {
│
╵
I tried to manually fix the state file, and to remove and re-import the resource, but nothing helped.
This issue with
YMMV but I got this working without
Note that for me it didn't work with
I also hit this issue when introducing this feature. I ended up with a nullable variable declaration and a dynamic block, along the lines of the sketch after this comment.
In order to smoothly migrate existing clusters, the default variable value is null; I tried an object with enabled set to false, but it fails at runtime. The nasty part is explaining the valid transitions for "provisioned_storage_volume_throughput". The following transitions work:
Other transitions will fail, in particular:
I could probably experiment with preconditions and postconditions to improve it, but I am on Terraform 1.0.x at the moment, so this is not yet an option. All in all, I agree that the provider should be more forgiving and treat null the same way as {enabled = false, throughput = null}; this would make it possible to simplify the migration and minimise the valid states/transitions. P.S. The reason I am setting throughput to 0 when disabling is to keep the Terratest code simple; from the TF perspective it could be null or missing when enabled is set to false.
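A rough sketch of what such a variable declaration and dynamic block might look like; the variable name, defaults, and volume size are illustrative, not the commenter's actual code:

```hcl
variable "provisioned_throughput" {
  description = "Provisioned throughput settings; null leaves the block out entirely."
  type = object({
    enabled           = bool
    volume_throughput = number
  })
  default = null
}

resource "aws_msk_cluster" "example" {
  # ... other required arguments omitted ...

  broker_node_group_info {
    # ...

    storage_info {
      ebs_storage_info {
        volume_size = 1000

        # Emit the block only when the variable is non-null, so existing
        # clusters can migrate without introducing the block at all.
        dynamic "provisioned_throughput" {
          for_each = var.provisioned_throughput != null ? [var.provisioned_throughput] : []
          content {
            enabled = provisioned_throughput.value.enabled
            # Pin throughput to 0 when disabled, as described in the P.S. above.
            volume_throughput = provisioned_throughput.value.enabled ? provisioned_throughput.value.volume_throughput : 0
          }
        }
      }
    }
  }
}
```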
My workaround for this was to increase
Update: Unfortunately, despite being able to apply the
We are also seeing this in our plans. After doing a search, it looks like there is no fix yet other than ignoring the changes?
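If ignoring the drift is acceptable, one possible stopgap is a lifecycle ignore_changes rule scoped to the nested block. This is an untested sketch, not a confirmed fix from this thread:

```hcl
resource "aws_msk_cluster" "example" {
  # ... existing configuration ...

  lifecycle {
    # Stop Terraform from trying to reconcile the provisioned_throughput
    # block that keeps disappearing from state after every apply.
    ignore_changes = [
      broker_node_group_info[0].storage_info[0].ebs_storage_info[0].provisioned_throughput,
    ]
  }
}
```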
Hello, I am stuck on this issue and I tried the following but couldn't make it work.
Community Note
Terraform CLI and Terraform AWS Provider Version
Terraform v1.1.4
on linux_amd64
Affected Resource(s)
Terraform Configuration Files
Actual Behavior
The existing behavior seems to support only two configurations.
Attempts to disable provisioned throughput require that the block no longer contain the volume_throughput field; thus the weird double dynamic blocks in the example above (a rough sketch of that pattern is shown below).
Once disabled, terraform plan constantly detects a change, as the provisioned_throughput block doesn't appear to exist as part of the state. Also, simple removal of the block is not sufficient to disable provisioned throughput in the case where it is already enabled on the cluster.
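A rough sketch of the "double dynamic block" pattern described above, with illustrative variable names rather than the actual configuration from this report:

```hcl
# Inside broker_node_group_info > storage_info > ebs_storage_info:
# one dynamic block renders the enabled case (volume_throughput allowed) and
# the other renders the disabled case (volume_throughput must be omitted),
# so at most one provisioned_throughput block is ever generated.
dynamic "provisioned_throughput" {
  for_each = var.throughput_enabled ? [1] : []
  content {
    enabled           = true
    volume_throughput = var.volume_throughput
  }
}

dynamic "provisioned_throughput" {
  for_each = var.throughput_enabled ? [] : [1]
  content {
    enabled = false
    # volume_throughput intentionally omitted; disabling requires dropping it.
  }
}
```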
Steps to Reproduce
Important Factoids
References