
provider/aws: Changing count of instances with volume attachments causes all attachments to be forced to new resources #83

Closed
hashibot opened this issue Jun 13, 2017 · 8 comments
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service. upstream-terraform Addresses functionality related to the Terraform core binary.

Comments

@hashibot

This issue was originally opened by @SpencerBrown as hashicorp/terraform#5240. It was migrated here as part of the provider split. The original body of the issue is below.


Here's the scenario (using the latest 0.6.11):

I have a cluster of aws_instance resources with a count.

Each instance has an aws_ebs_volume and a corresponding aws_volume_attachment, each using the same count (obviously).

All is well for the initial plan/apply.

Now, increase the count by 1. Expect to simply add another instance with its EBS volume and attachment.

Instead, Terraform wants to force new resources for ALL the volume attachments. Not good!

Here's an example (I've removed some of the irrelevant detail, so this might not work as-is):

resource "aws_instance" "kube_worker" {
  count = "5"
  ami = "ami-something"
  instance_type = "t2.micro"
  availability_zone = "us-west-2a"
  subnet_id = "sn-something"
}

resource "aws_ebs_volume" "docker" {
  count = "5"
  availability_zone = "us-west-2a"
  type = "gp2"
  size = "10"
}

resource "aws_volume_attachment" "docker" {
  count = "5"
  device_name = "/dev/xvdd"
  volume_id = "${element(aws_ebs_volume.docker.*.id, count.index)}"
  instance_id = "${element(aws_instance.kube_worker.*.id, count.index)}"
}

If you plan/apply this, then change the 5's to 6's and re-plan, you get a plan that wants to force new resources for the first 5 volume attachments, because it thinks the instance_id and volume_id have changed (which they have not, obviously).

(I unfortunately did not save the actual log.)

This of course fails, because the volumes are still there and attached, and Terraform cannot re-attach them.

My only recourse was to taint the existing instances and rebuild them all. This is bad, as I would like to be able to non-disruptively add a new node to my Kubernetes cluster using Terraform. I used to be able to do this before I had these volume attachments on each node.

@hashibot hashibot added the bug Addresses a defect in current functionality. label Jun 13, 2017
@jacobrandall

Curious if there are any plans to address this? We've hit the issue as well and are using the ignore_changes workaround in the meantime, but would love to see this resolved if possible. Thanks!
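
For reference, a rough sketch of that workaround, using the resource names from the original report (adjust to your own configuration): ignoring changes to instance_id and volume_id on the attachment keeps Terraform from planning a replacement when the element() indexes shift after a count change.

# Sketch of the ignore_changes workaround (Terraform 0.11-era syntax),
# using the resource names from the original report.
resource "aws_volume_attachment" "docker" {
  count = "5"

  device_name = "/dev/xvdd"
  volume_id   = "${element(aws_ebs_volume.docker.*.id, count.index)}"
  instance_id = "${element(aws_instance.kube_worker.*.id, count.index)}"

  lifecycle {
    # Prevent Terraform from forcing a new attachment when element()
    # shifts the list indexes after the count changes.
    ignore_changes = ["instance_id", "volume_id"]
  }
}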

@radeksimko radeksimko added the service/ec2 Issues and PRs that pertain to the ec2 service. label Jan 25, 2018
@sporokh

sporokh commented Feb 23, 2018

Any estimations regarding this fix?

@bflad
Contributor

bflad commented Feb 23, 2018

Hi everyone! 👋 Sorry you're running into trouble here.

To briefly provide an update about this behavior and, hopefully, a fix for it: there are upcoming improvements to the handling of the configuration language used by Terraform that will (among many other things) mitigate some of the issues when working with count > 1 resources. This work is occurring right now, but I cannot give an exact date for it yet. These improvements will most likely be announced with a Terraform core release (e.g. 0.12 or higher).

In the meantime, sorry again you're running into trouble with this and we hope this will get resolved as part of that work.

@sporokh

sporokh commented Feb 23, 2018

Thanks @bflad for the update!

@bflad
Contributor

bflad commented Nov 8, 2018

Here's a fully reproducible configuration of this issue on Terraform 0.11.10. Now that the Terraform 0.12 alphas are coming out, we can use this to verify whether or not it's still an issue.

terraform {
  required_version = "0.11.10"
}

provider "aws" {
  region  = "us-east-1"
  version = "1.43.0"
}

variable "count" {
  # bug encountered when updated to 3
  default = 1
}

data "aws_ami" "test" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }
}

data "aws_availability_zones" "available" {}

data "aws_subnet" "test" {
  availability_zone = "${data.aws_availability_zones.available.names[0]}"
  default_for_az    = true
}

resource "aws_instance" "test" {
  count = "${var.count}"

  ami           = "${data.aws_ami.test.id}"
  instance_type = "t2.medium"
  subnet_id     = "${data.aws_subnet.test.id}"
}

resource "aws_ebs_volume" "test" {
  count = "${var.count}"

  availability_zone = "${element(aws_instance.test.*.availability_zone, count.index)}"
  size              = "10"
  type              = "gp2"
}

resource "aws_volume_attachment" "test" {
  count = "${var.count}"

  device_name = "/dev/xvdg"
  instance_id = "${element(aws_instance.test.*.id, count.index)}"
  volume_id   = "${element(aws_ebs_volume.test.*.id, count.index)}"
}

@xakraz

xakraz commented Dec 6, 2018

Issue still there with:

  • tf 0.11.10
  • aws-provider 1.51.0

@bflad
Contributor

bflad commented Jul 7, 2020

Hi folks 👋 This issue is resolved in Terraform 0.12.6 and later, which supports new functionality in the configuration language aimed at solving problems like these. The new resource-level for_each argument can be used so resources are indexed in the Terraform state based on a string map or set, rather than the simple numeric list used by the resource-level count argument. Resources switched from count to for_each will no longer have issues with removing elements in the middle of a list or with general rearranging of elements, as the resource index keys are stable.
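
For illustration only (this sketch is not part of the original comment), a for_each version of the reporter's configuration on Terraform 0.12.6+ might look roughly like this, keyed by a stable string per worker:

# A minimal for_each sketch (Terraform 0.12.6+ syntax) using the resource
# names and placeholder values from the original report.
variable "workers" {
  type    = set(string)
  default = ["worker-0", "worker-1", "worker-2"]
}

resource "aws_instance" "kube_worker" {
  for_each = var.workers

  ami           = "ami-something"
  instance_type = "t2.micro"
  subnet_id     = "sn-something"
}

resource "aws_ebs_volume" "docker" {
  for_each = var.workers

  availability_zone = aws_instance.kube_worker[each.key].availability_zone
  type              = "gp2"
  size              = 10
}

resource "aws_volume_attachment" "docker" {
  for_each = var.workers

  device_name = "/dev/xvdd"
  volume_id   = aws_ebs_volume.docker[each.key].id
  instance_id = aws_instance.kube_worker[each.key].id
}

Because each resource instance is identified by its string key rather than a positional index, adding or removing a worker only affects that worker's instance, volume, and attachment.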

If you're looking for general assistance with how to implement for_each in this situation, please note that we use GitHub issues in this repository for tracking bugs and enhancements with the Terraform AWS Provider codebase rather than for questions. While we may be able to help with certain simple problems here, it's generally better to use the community forums, where there are far more people ready to help; the GitHub issues here are generally monitored only by a few maintainers and dedicated community members interested in code development of the Terraform AWS Provider itself.

@bflad bflad closed this as completed Jul 7, 2020
@ghost

ghost commented Aug 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Aug 6, 2020