EC2 instances with EBS volumes destroyed & recreated on each apply #72

Closed
hashibot opened this issue Jun 13, 2017 · 23 comments
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service. stale Old or inactive issues managed by automation, if no further action taken these will get closed.

Comments

@hashibot

This issue was originally opened by @bwhaley as hashicorp/terraform#5006. It was migrated here as part of the provider split. The original body of the issue is below.


Running v0.6.11, I noticed that instances with an ebs_block_device are recreated every time I run terraform apply, even when there are no relevant changes. The block looks like this:

ebs_block_device {
    device_name = "${var.es_ebs_vol.device_name}"
    volume_type = "${var.es_ebs_vol.type}"
    volume_size = "${var.es_ebs_vol.size}"
}

This seems similar to #913 but that was resolved some time ago. Any ideas?

@hashibot hashibot added the bug Addresses a defect in current functionality. label Jun 13, 2017
@hashibot

This comment was originally opened by @tiyberius as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


+1

Having the same issue as @bwhaley on 0.6.11. If it helps, I had the issue on 0.6.9 as well. I was hoping that an upgrade to 0.6.11 would fix it but it has not :(

@hashibot

This comment was originally opened by @davedash as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I'm trying to figure this out too in #68.

Does your AMI specify an EBS snapshot to mount as a root device? This is my problem when trying to launch an ECS container.

@hashibot

This comment was originally opened by @bwhaley as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


Yes - but isn't the root volume for all EBS-backed AMIs started from an EBS snapshot?

@hashibot

This comment was originally opened by @davedash as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


@bwhaley I guess that's the case.

So this is only happening on my t2 based instances that happen to be running Amazon Linux. This might just be a coincidence.

@hashibot

This comment was originally opened by @bwhaley as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


Interesting - in my case it's also Amazon Linux on T2 instance types.

@hashibot

This comment was originally opened by @octalthorpe as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


***** Forget that; I need to use "root_block_device". My mistake.
+1
Seeing the same issue here across more than just t2 instances.

ebs_block_device {
    device_name = "/dev/xvda"
    volume_size = 128
}

Same behaviour on 0.6.9 through 0.6.12; possibly a change on the AWS side?
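As the retraction at the top of the comment above suggests, the root volume should be declared with root_block_device rather than ebs_block_device. A minimal sketch (the AMI ID, instance type, and sizes are illustrative placeholders, not from this thread):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-00000000" # illustrative placeholder
  instance_type = "t2.micro"

  # Resize the AMI's root volume here; the device name comes from
  # the AMI's RootDeviceName, so it is not specified.
  root_block_device {
    volume_size = 128
    volume_type = "gp2"
  }

  # ebs_block_device is for additional, non-root volumes only.
  ebs_block_device {
    device_name = "/dev/xvdb"
    volume_size = 50
    volume_type = "gp2"
  }
}
```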

@hashibot

This comment was originally opened by @eedwardsdisco as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


+1 I'm hitting this with 0.6.12

Launching an AMI that's a t2.micro with 2 EBS volumes, created using Packer

I specify a "root_block_device" and "data_block_device" mapping in the terraform template.

It's causing it to mark the data volume as needing to be re-created every time.

@hashibot

This comment was originally opened by @Sreeramk as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I am running into a similar issue; the plan keeps showing the diff below. A couple of observations: I am not using iops, yet Terraform seems to compute iops; and I did not change delete_on_termination, yet it thinks there is a change.

ebs_block_device.3796809015.delete_on_termination: "true" => "1" (forces new resource)
ebs_block_device.3796809015.device_name: "/dev/xvdl" => "/dev/xvdl" (forces new resource)
ebs_block_device.3796809015.encrypted: "false" => ""
ebs_block_device.3796809015.iops: "150" => ""
ebs_block_device.3796809015.snapshot_id: "" => "" (forces new resource)
ebs_block_device.3796809015.volume_size: "50" => "50" (forces new resource)
ebs_block_device.3796809015.volume_type: "gp2" => "gp2" (forces new resource)

@hashibot

This comment was originally opened by @serdardalgic as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I'm also hitting the same issue here:

> terraform plan output
...
    ami:                                               "ami-d3a04fbc" => "ami-d3a04fbc"
    associate_public_ip_address:                       "false" => "false"
    availability_zone:                                 "eu-central-1a" => "<computed>"
    ebs_block_device.#:                                "3" => "3"
    ebs_block_device.1376874904.delete_on_termination: "true" => "false"
    ebs_block_device.1376874904.device_name:           "/dev/xvdf" => ""
    ebs_block_device.1494882292.delete_on_termination: "true" => "false"
    ebs_block_device.1494882292.device_name:           "/dev/xvdh" => ""
    ebs_block_device.1712666200.delete_on_termination: "true" => "false"
    ebs_block_device.1712666200.device_name:           "/dev/xvdg" => ""
    ebs_block_device.3846643179.delete_on_termination: "" => "true" (forces new resource)
    ebs_block_device.3846643179.device_name:           "" => "/dev/xvdh" (forces new resource)
    ebs_block_device.3846643179.encrypted:             "" => "<computed>" (forces new resource)
    ebs_block_device.3846643179.iops:                  "" => "100" (forces new resource)
    ebs_block_device.3846643179.snapshot_id:           "" => "<computed>" (forces new resource)
    ebs_block_device.3846643179.volume_size:           "" => "10" (forces new resource)
    ebs_block_device.3846643179.volume_type:           "" => "io1" (forces new resource)
    ebs_block_device.3994770134.delete_on_termination: "" => "true" (forces new resource)
    ebs_block_device.3994770134.device_name:           "" => "/dev/xvdg" (forces new resource)
    ebs_block_device.3994770134.encrypted:             "" => "<computed>" (forces new resource)
    ebs_block_device.3994770134.iops:                  "" => "250" (forces new resource)
    ebs_block_device.3994770134.snapshot_id:           "" => "<computed>" (forces new resource)
    ebs_block_device.3994770134.volume_size:           "" => "25" (forces new resource)
    ebs_block_device.3994770134.volume_type:           "" => "io1" (forces new resource)
    ebs_block_device.4023988449.delete_on_termination: "" => "true" (forces new resource)
    ebs_block_device.4023988449.device_name:           "" => "/dev/xvdf" (forces new resource)
    ebs_block_device.4023988449.encrypted:             "" => "<computed>" (forces new resource)
    ebs_block_device.4023988449.iops:                  "" => "2000" (forces new resource)
    ebs_block_device.4023988449.snapshot_id:           "" => "<computed>" (forces new resource)
    ebs_block_device.4023988449.volume_size:           "" => "800" (forces new resource)
    ebs_block_device.4023988449.volume_type:           "" => "io1" (forces new resource)
...
    root_block_device.#:                               "1" => "<computed>"
    security_groups.#:                                 "0" => "<computed>"

Although it's already in terraform.tfstate:

                "aws_instance.mongod": {
                    "type": "aws_instance",
                    "primary": {
                        "id": "i-32f24a8f",
                        "attributes": {
                            "ami": "ami-d3a04fbc",
                            "associate_public_ip_address": "false",
                            "availability_zone": "eu-central-1a",
                            "disable_api_termination": "false",
                            "ebs_block_device.#": "3",
                            "ebs_block_device.1376874904.delete_on_termination": "true",
                            "ebs_block_device.1376874904.device_name": "/dev/xvdf",
                            "ebs_block_device.1376874904.encrypted": "false",
                            "ebs_block_device.1376874904.iops": "2000",
                            "ebs_block_device.1376874904.snapshot_id": "snap-e726c30c",
                            "ebs_block_device.1376874904.volume_size": "800",
                            "ebs_block_device.1376874904.volume_type": "io1",
                            "ebs_block_device.1494882292.delete_on_termination": "true",
                            "ebs_block_device.1494882292.device_name": "/dev/xvdh",
                            "ebs_block_device.1494882292.encrypted": "false",
                            "ebs_block_device.1494882292.iops": "100",
                            "ebs_block_device.1494882292.snapshot_id": "snap-398501d2",
                            "ebs_block_device.1494882292.volume_size": "10",
                            "ebs_block_device.1494882292.volume_type": "io1",
                            "ebs_block_device.1712666200.delete_on_termination": "true",
                            "ebs_block_device.1712666200.device_name": "/dev/xvdg",
                            "ebs_block_device.1712666200.encrypted": "false",
                            "ebs_block_device.1712666200.iops": "250",
                            "ebs_block_device.1712666200.snapshot_id": "snap-b60e035e",
                            "ebs_block_device.1712666200.volume_size": "25",
                            "ebs_block_device.1712666200.volume_type": "io1",
                            "ebs_optimized": "false",
                            "ephemeral_block_device.#": "0",
...

Please tell me if you need more info about the issue.

@hashibot

This comment was originally opened by @serdardalgic as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I've solved my problem; just to inform you: I was building from AMIs that already had 3 EBS block devices, and in Terraform I was provisioning them with cloud-init. That's why the terraform plan output shows 6 different ebs_block_device IDs, which was causing the trouble. So on my side the problem does not exist. Sorry for the confusion.

@hashibot

This comment was originally opened by @madamedwards as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I'm having a similar issue. Mine seems to have something to do with the volumes being encrypted. When I run terraform apply, I get the following:

kms_key_id: "arn:aws:kms:us-east-1:<account_id>:key/<key_id>" => "<key_id (unchanged)>" (forces new resource)

As noted, the key did not change.

@hashibot

This comment was originally opened by @dennybaa as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


+1

@hashibot

This comment was originally opened by @mr510 as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


+1. In Terraform v0.7 I am getting a similar error when encrypting devices in an aws_db_instance.

kms_key_id: "arn:aws:kms:us-west-2::key/key_id" => "Key-ID" (forces new resource)

The key ID is the same and has not changed.

@hashibot

This comment was originally opened by @mr510 as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


Hmm, so the issue can be resolved by using the ARN instead of the key_id.

This forces a new resource: kms_key_id = "${aws_kms_key.key_name.key_id}"

Change it to kms_key_id = "${aws_kms_key.key_name.arn}" and no new resource is created.
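A minimal sketch of that fix, assuming the thread-era 0.x interpolation syntax; the resource names are illustrative, and a real aws_db_instance needs more arguments than shown:

```hcl
resource "aws_kms_key" "key_name" {
  description = "example key"
}

resource "aws_db_instance" "example" {
  # ... other required arguments elided ...
  storage_encrypted = true

  # Referencing key_id here produced a perpetual diff, because AWS
  # stores the full ARN; referencing the ARN matches what AWS returns.
  kms_key_id = "${aws_kms_key.key_name.arn}"
}
```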

@hashibot

This comment was originally opened by @iwat as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


Same here; Terraform tried to re-create an EC2 instance that has an additional EBS volume.

ebs_block_device.#:                                "1" => "1"
ebs_block_device.1399095401.delete_on_termination: "true" => "false"
ebs_block_device.1399095401.device_name:           "/dev/sdb" => ""
ebs_block_device.2576023345.delete_on_termination: "" => "true" (forces new resource)
ebs_block_device.2576023345.device_name:           "" => "/dev/sdb" (forces new resource)
ebs_block_device.2576023345.encrypted:             "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.iops:                  "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.snapshot_id:           "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.volume_size:           "" => "8" (forces new resource)
ebs_block_device.2576023345.volume_type:           "" => "gp2" (forces new resource)

Inside terraform.tfstate:

"ebs_block_device.#": "1",
"ebs_block_device.1399095401.delete_on_termination": "true",
"ebs_block_device.1399095401.device_name": "/dev/sdb",
"ebs_block_device.1399095401.encrypted": "false",
"ebs_block_device.1399095401.iops": "100",
"ebs_block_device.1399095401.snapshot_id": "snap-9c8bf919",
"ebs_block_device.1399095401.volume_size": "8",
"ebs_block_device.1399095401.volume_type": "gp2",

It stops re-creating the instance if I change my .tf file from:

ebs_block_device {
    device_name           = "/dev/sdb"
    volume_type           = "gp2"
    volume_size           = 8
    delete_on_termination = true
}

to

ebs_block_device {
    device_name           = "/dev/sdb"
    volume_type           = "gp2"
    volume_size           = 8
    delete_on_termination = true
    snapshot_id           = "snap-9c8bf919"
}

@hashibot

This comment was originally opened by @jurajseffer as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


Terraform 0.7.11 here. I use a remote state file (S3). When I run plan after apply, Terraform reports:

ebs_block_device.#:                                "0" => "1"
ebs_block_device.3239300295.delete_on_termination: "" => "false" (forces new resource)
ebs_block_device.3239300295.device_name:           "" => "/dev/sda1" (forces new resource)
ebs_block_device.3239300295.encrypted:             "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.iops:                  "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.snapshot_id:           "" => "<computed>" (forces new resource)
ebs_block_device.3239300295.volume_size:           "" => "100" (forces new resource)
ebs_block_device.3239300295.volume_type:           "" => "gp2" (forces new resource)

Statefile has the following for the instance:

"root_block_device.#": "1",
"root_block_device.0.delete_on_termination": "false",
"root_block_device.0.iops": "300",
"root_block_device.0.volume_size": "100",
"root_block_device.0.volume_type": "gp2",

Config:

ebs_block_device {
    device_name = "/dev/sda1"
    volume_type = "gp2"
    volume_size = "${var.instance_volume_size}"
    delete_on_termination = false
  }

@hashibot

This comment was originally opened by @jstlaurent as hashicorp/terraform#5006 (comment). It was migrated here as part of the provider split. The original comment is below.


I'm also getting this with Terraform 0.7.13.

My instance definition looks like this:

resource "aws_instance" "app_instance" {
  ami = "${data.aws_ami.ecs_optimized.id}"
  instance_type = "${var.app_instance["instance_type"]}"
  count = "${var.app_instance["instance_count"]}"

  # Some storage
  ebs_block_device {
    device_name = "/dev/sdb"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sdc"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sdd"
    volume_size = 50
    volume_type = "gp2"
  }
  ebs_block_device {
    device_name = "/dev/sde"
    volume_size = 50
    volume_type = "gp2"
  }
}

I get this every time I run plan:

ebs_block_device.#:                                "5" => "4"
ebs_block_device.2554893574.delete_on_termination: "true" => "true" (forces new resource)
ebs_block_device.2554893574.device_name:           "/dev/sdc" => "/dev/sdc" (forces new resource)
ebs_block_device.2554893574.encrypted:             "false" => "<computed>"
ebs_block_device.2554893574.iops:                  "150" => "<computed>"
ebs_block_device.2554893574.snapshot_id:           "" => "<computed>"
ebs_block_device.2554893574.volume_size:           "50" => "50" (forces new resource)
ebs_block_device.2554893574.volume_type:           "gp2" => "gp2" (forces new resource)
[The other three EBS blocks show the same messages.]

One explanation is that the AMI I'm using (Amazon ECS Optimized, ami-6df8fe7a) defines two block devices. The output from aws ec2 describe-images --image-ids ami-6df8fe7a:

{
    "Images": [
        {
            "VirtualizationType": "hvm", 
            "Name": "amzn-ami-2016.09.c-amazon-ecs-optimized", 
            "Hypervisor": "xen", 
            "ImageOwnerAlias": "amazon", 
            "EnaSupport": true, 
            "SriovNetSupport": "simple", 
            "ImageId": "ami-6df8fe7a", 
            "State": "available", 
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/xvda", 
                    "Ebs": {
                        "DeleteOnTermination": true, 
                        "SnapshotId": "snap-441eb8ad", 
                        "VolumeSize": 8, 
                        "VolumeType": "gp2", 
                        "Encrypted": false
                    }
                }, 
                {
                    "DeviceName": "/dev/xvdcz", 
                    "Ebs": {
                        "DeleteOnTermination": true, 
                        "Encrypted": false, 
                        "VolumeSize": 22, 
                        "VolumeType": "gp2"
                    }
                }
            ], 
            "Architecture": "x86_64", 
            "ImageLocation": "amazon/amzn-ami-2016.09.c-amazon-ecs-optimized", 
            "RootDeviceType": "ebs", 
            "OwnerId": "591542846629", 
            "RootDeviceName": "/dev/xvda", 
            "CreationDate": "2016-12-07T18:14:59.000Z", 
            "Public": true, 
            "ImageType": "machine", 
            "Description": "Amazon Linux AMI 2016.09.c x86_64 ECS HVM GP2"
        }
    ]
}

The line ebs_block_device.#: "5" => "4" makes me think that one of the AMI-defined blocks is designated as the root block and that the other is considered a standard EBS block. But since that second block is not tracked in the configuration, every plan or apply sees only four managed blocks in my configuration yet finds five blocks in AWS, so it resets all the blocks to the configuration definition.

The workaround seems to be to define the EBS devices of the AMI (beyond the first) in my configuration, by adding this:

ebs_block_device {
  device_name = "/dev/xvdcz"
  volume_size = 22
  volume_type = "gp2"
}

That fixes the issue, but I have to remember to update it if the AMI changes. Less than ideal, unfortunately, since I'm grabbing the latest.
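An alternative that avoids hard-coding the AMI's extra device is to tell Terraform not to diff EBS block devices at all. This is a sketch in thread-era syntax, assuming a Terraform version whose lifecycle block accepts ebs_block_device in ignore_changes; the trade-off is that Terraform then also ignores your own changes to those blocks:

```hcl
resource "aws_instance" "app_instance" {
  ami           = "${data.aws_ami.ecs_optimized.id}"
  instance_type = "t2.micro" # illustrative

  lifecycle {
    # Skip diffing EBS devices, including the AMI-supplied one.
    ignore_changes = ["ebs_block_device"]
  }
}
```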

@radeksimko radeksimko added the service/ec2 Issues and PRs that pertain to the ec2 service. label Jan 25, 2018
@pawel-miszkurka-cko

pawel-miszkurka-cko commented Jan 25, 2018

I have a similar issue. Whenever I add an aws_volume_attachment block, the instance gets recreated every single time I run terraform apply.

module "packages_instance" {
  source = "../../../../modules/ec2-instance/"

  count                  = 1
  name                   = "${format("%s-%s-a", var.env, var.service)}"
  ami                    = "${var.amazon_linux_ami}"
  instance_type          = "${var.instance_size}"
  subnet_id              = "${element(data.terraform_remote_state.vpc.it_subnets, 0)}"
  vpc_security_group_ids = ["${module.packages_sg.this_security_group_id}", "${data.terraform_remote_state.client_admin_sg.this_security_group_id}"]
  key_name               = "${var.key_name}"

  tags = "${merge(var.default_tags, map("Name", format("%s-%s-a", var.env, var.service)))}"
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/xvdb"
  volume_id    = "${data.terraform_remote_state.ebs.volume_id}"
  instance_id  = "${module.packages_instance.id}"
}

Can anyone suggest a workaround for that problem, please?

@pawel-miszkurka-cko

The current workaround for me is to use a resource instead of a module for the EC2 instance.

@marcinrozanski

marcinrozanski commented Feb 26, 2019

Terraform v0.11.11
provider.aws: version = "~> 1.59"

Same issue here: an EC2 t2.small instance gets re-created each time I add the following EBS block.

ebs_block_device = {
  delete_on_termination = "true"
  device_name           = "/dev/sda1"
}

@leosalcie

Same issue here, running Terraform v0.14.4. Every time I run terraform apply the EC2 instance is recreated. I already tried adding lifecycle ignore_changes, no luck.

lifecycle {
  ignore_changes = [ebs_block_device, root_block_device]
}

root_block_device {
  volume_type = var.root_volume_type
  volume_size = var.root_volume_size
}

ebs_block_device {
  device_name = var.ebs_volume_name
  volume_size = var.ebs_volume_size
  volume_type = var.ebs_volume_type
}

@github-actions

github-actions bot commented Jan 3, 2023

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

@github-actions github-actions bot added the stale Old or inactive issues managed by automation, if no further action taken these will get closed. label Jan 3, 2023
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Feb 2, 2023
@github-actions

github-actions bot commented Mar 5, 2023

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 5, 2023