provider/aws: Changing count of instances with volume attachments causes all attachments to be forced to new resources #83
Curious if there are any plans to address this? We've hit the issue as well.
Any estimate on when this might be fixed?
Hi everyone! 👋 Sorry you're running into trouble here. To briefly provide an update on this behavior and, hopefully, the fix for it: there are upcoming improvements to the configuration language used by Terraform that will (among many other things) mitigate some of the issues when working with count > 1 resources. This work is occurring right now, but I cannot give an exact date for it yet. These improvements will most likely be announced with a Terraform core release (e.g. 0.12 or higher). In the meantime, sorry again that you're running into trouble with this; we hope it will be resolved as part of that work.
Thanks @bflad for the update!
Here's a fully reproducible configuration of this issue on Terraform 0.11.10. Now that the Terraform 0.12 alphas are coming out, we can use this to verify whether or not it's still an issue.
terraform {
required_version = "0.11.10"
}
provider "aws" {
region = "us-east-1"
version = "1.43.0"
}
variable "count" {
# bug encountered when updated to 3
default = 1
}
data "aws_ami" "test" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn-ami-hvm-*-x86_64-gp2"]
}
}
data "aws_availability_zones" "available" {}
data "aws_subnet" "test" {
availability_zone = "${data.aws_availability_zones.available.names[0]}"
default_for_az = true
}
resource "aws_instance" "test" {
count = "${var.count}"
ami = "${data.aws_ami.test.id}"
instance_type = "t2.medium"
subnet_id = "${data.aws_subnet.test.id}"
}
resource "aws_ebs_volume" "test" {
count = "${var.count}"
availability_zone = "${element(aws_instance.test.*.availability_zone, count.index)}"
size = "10"
type = "gp2"
}
resource "aws_volume_attachment" "test" {
count = "${var.count}"
device_name = "/dev/xvdg"
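# When var.count changes, the aws_instance.test.*.id and aws_ebs_volume.test.*.id
# splat lists become (partially) computed, so element() yields an unknown value
# for every index; instance_id and volume_id force replacement, which is why all
# existing attachments are planned for recreation.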
instance_id = "${element(aws_instance.test.*.id, count.index)}"
volume_id = "${element(aws_ebs_volume.test.*.id, count.index)}"
}
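To exercise the reproduction: apply with the default count of 1, then change the default to 3 and plan again. The existing aws_volume_attachment.test resource is reported as forcing a new resource even though its instance and volume have not changed, which matches the behavior described in this issue.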
Issue is still there with:
Hi folks 👋 This issue is resolved in Terraform 0.12.6 and later, which supports new functionality in the configuration language aimed at solving problems like these. The new resource-level for_each argument tracks each resource instance by a stable key rather than a positional index, so changing the set of instances no longer forces the others to be replaced. If you're looking for general assistance with how to implement for_each in your configuration, the Terraform documentation and community forums are good places to ask.
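For reference, here is a minimal sketch (not part of the original thread) of what the reproduction above might look like when rewritten with resource-level for_each. It assumes Terraform 0.12.6+, reuses the data sources from the earlier configuration, and the variable name and node keys are made up for illustration:
variable "nodes" {
  type    = set(string)
  default = ["node1", "node2", "node3"]
}

resource "aws_instance" "test" {
  for_each      = var.nodes
  ami           = data.aws_ami.test.id
  instance_type = "t2.medium"
  subnet_id     = data.aws_subnet.test.id
}

resource "aws_ebs_volume" "test" {
  for_each          = var.nodes
  availability_zone = aws_instance.test[each.key].availability_zone
  size              = 10
  type              = "gp2"
}

resource "aws_volume_attachment" "test" {
  for_each    = var.nodes
  device_name = "/dev/xvdg"
  instance_id = aws_instance.test[each.key].id
  volume_id   = aws_ebs_volume.test[each.key].id
}
Adding a new key such as "node4" to var.nodes then plans exactly one new instance, volume, and attachment, leaving the existing ones untouched. Migrating an existing count-based deployment to for_each also requires re-addressing the existing resources in state (for example with terraform state mv); otherwise Terraform will plan to replace them.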
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
This issue was originally opened by @SpencerBrown as hashicorp/terraform#5240. It was migrated here as part of the provider split. The original body of the issue is below.
Here's the scenario (using the latest 0.6.11):
have a cluster of aws_instance with a count
each instance has an aws_ebs_volume and a corresponding aws_volume_attachment, each using the same count (obviously)
all is well for the initial plan/apply
Now, increase the count by 1. Expect to simply add another instance with its ebs volume and attachment.
Instead, Terraform wants to force a new resource for ALL the volume attachments. Not good!
Here's an example (I've removed some of the irrelevant detail, so this might not work as-is):
If you plan/apply this, then change the 5's to 6's and re-plan, you get a plan that wants to force a new resource for the first 5 volume attachments, because it thinks the instance_id and volume_id have changed (which they have not, obviously).
(I unfortunately did not save the actual log.)
This of course fails, because the volumes are still there and attached and Terraform cannot re-attach them.
My only recourse was to taint the existing instances and rebuild them all. This is bad, as I would like to be able to non-disruptively add a new node to my Kubernetes cluster using Terraform. I used to be able to do this before I had these volume attachments on each node.