aws_launch_template update from 3.58 -> 3.59 state not migrated #20977

Closed
mbelang opened this issue Sep 21, 2021 · 9 comments
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service.

Comments

@mbelang

mbelang commented Sep 21, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Affected Resource(s)

  • aws_launch_template

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

resource "aws_launch_template" "managed_worker_lt" {
  for_each = local.node_groups_per_azs

  name                   = "${local.cluster-name}-${each.key}-${each.value.node_group["instance-types"]}-lt"
  update_default_version = true
  user_data              = base64encode(data.template_file.managed_workers_userdata[each.key].rendered)
  image_id               = data.aws_ami.managed_workers[each.key].image_id
  tags                   = local.tags

  # Root device name for workers based on AMI.
  ebs_optimized = !contains(local.ebs_optimized_not_supported, each.value.node_group["instance-types"])

  block_device_mappings {
    device_name = data.aws_ami.managed_workers[each.key].root_device_name
    ebs {
      volume_size           = each.value.node_group["instance-volume-sizes"]
      volume_type           = "gp3"
      encrypted             = true
      delete_on_termination = true
    }
  }

  network_interfaces {
    security_groups             = [aws_security_group.eks-nodes.id]
    associate_public_ip_address = false
    delete_on_termination       = true
  }

  dynamic "tag_specifications" {
    for_each = ["instance", "volume"]
    content {
      resource_type = tag_specifications.value
      tags = merge(local.tags, lookup(each.value.node_group, "extra-tags", {}),
        {
          "Name"                                            = "${local.cluster-name}-${each.key}"
          "k8s.io/cluster-autoscaler/${local.cluster-name}" = "owned"
          "k8s.io/cluster-autoscaler/enabled"               = "true"
        }
      )
    }
  }

  metadata_options {
    http_put_response_hop_limit = 2
    http_endpoint               = "enabled"
    http_tokens                 = "optional"
  }

  lifecycle {
    create_before_destroy = true
  }
}

Plan Output

  # aws_launch_template.managed_worker_lt["REDACTED"] will be updated in-place
  ~ resource "aws_launch_template" "managed_worker_lt" {
      ~ default_version         = 2 -> (known after apply)
        id                      = "REDACTED"
      ~ latest_version          = 2 -> (known after apply)
        name                    = "some_name_subnet-xxxx-m5.xlarge-lt"
        tags                    = {
            "env"            = "prod"
        }
        # (9 unchanged attributes hidden)


      ~ metadata_options {
          + http_protocol_ipv6          = "disabled"
            # (3 unchanged attributes hidden)
        }


        # (4 unchanged blocks hidden)
    }

Expected Behavior

A new parameter with a proper default value should not trigger an update of the resource; instead, the state should be migrated to reflect the new default value.

Actual Behavior

The new default triggers an in-place update of aws_launch_template, which in turn triggers a rolling upgrade of the nodes because their launch configuration changes.
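
For context, "migrating the state" here would mean the provider backfilling the new attribute with its default into existing state so that no diff is produced. The sketch below shows what such an upgrade function could look like; it follows the Terraform Plugin SDK v2 StateUpgradeFunc signature, but the function name, the nested-map layout it assumes, and the wiring via SchemaVersion/StateUpgraders are illustrative assumptions, not the provider's actual code.

package example

import "context"

// launchTemplateStateUpgradeV0 (hypothetical name) backfills the new
// http_protocol_ipv6 attribute with its default, "disabled", in state that was
// written by provider versions predating the attribute, so the next plan shows
// no diff for it.
func launchTemplateStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
    if rawState == nil {
        return rawState, nil
    }

    if opts, ok := rawState["metadata_options"].([]interface{}); ok {
        for _, o := range opts {
            if m, ok := o.(map[string]interface{}); ok {
                if v, ok := m["http_protocol_ipv6"].(string); !ok || v == "" {
                    m["http_protocol_ipv6"] = "disabled"
                }
            }
        }
    }

    return rawState, nil
}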

Steps to Reproduce

  1. update from provider 3.58 to 3.59
  2. terraform plan

References

@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/ec2 Issues and PRs that pertain to the ec2 service. labels Sep 21, 2021
@justinretzolk justinretzolk added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 21, 2021
@ewbankkit
Copy link
Contributor

I think the problem here is that http_protocol_ipv6

"http_protocol_ipv6": {
Type: schema.TypeString,
Optional: true,
Default: ec2.LaunchTemplateInstanceMetadataProtocolIpv6Disabled,
ValidateFunc: validation.StringInSlice(ec2.LaunchTemplateInstanceMetadataProtocolIpv6_Values(), false),
},

needs to be Computed rather than have a Default value.
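
For illustration, the Computed variant would look roughly like the sketch below. The attribute name, validator, and constant come from the snippet above; the wrapping package, helper function, and import paths are assumptions for the sake of a self-contained example, not the provider's actual layout.

package example

import (
    "github.com/aws/aws-sdk-go/service/ec2"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
)

// httpProtocolIPv6Schema (hypothetical helper) drops the Default and marks the
// attribute Computed, so an unset value would be read back from the API instead
// of being forced to "disabled" in every plan.
func httpProtocolIPv6Schema() *schema.Schema {
    return &schema.Schema{
        Type:         schema.TypeString,
        Optional:     true,
        Computed:     true,
        ValidateFunc: validation.StringInSlice(ec2.LaunchTemplateInstanceMetadataProtocolIpv6_Values(), false),
    }
}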

@ewbankkit
Contributor

ewbankkit commented Sep 23, 2021

Hmm, no. If it's Computed we get:

Error: InvalidParameterValue: A value of ‘’ is not valid for http-protocol-ipv6. Valid values are ‘enabled’ or ‘disabled’.
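
That error is consistent with the expand step passing the schema's zero value straight through: with Computed and no Default, an unset attribute comes out of the configuration as an empty string, and EC2 rejects "" for http-protocol-ipv6. A guard along the lines of the sketch below would avoid sending the empty value; the AWS SDK request type and field name here are my assumption, not the provider's actual expand code.

package example

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/ec2"
)

// expandMetadataOptions (sketch) copies http_protocol_ipv6 into the API request
// only when a non-empty value was configured, so an unset Computed attribute is
// never sent to EC2 as "".
func expandMetadataOptions(tfMap map[string]interface{}) *ec2.LaunchTemplateInstanceMetadataOptionsRequest {
    apiObject := &ec2.LaunchTemplateInstanceMetadataOptionsRequest{}

    if v, ok := tfMap["http_protocol_ipv6"].(string); ok && v != "" {
        apiObject.HttpProtocolIpv6 = aws.String(v)
    }

    return apiObject
}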

@ingshtrom

Has there been any more research on this since it was first reported? We are running into the same thing, and we tried terraform apply -refresh-only -target=... to attempt to fix the state, but it did not help.

We have also tried manually adding the property to the state, updating our DynamoDB md5 hash, and then re-running terraform plan; it still saw it as a change to the resource.

In our smaller environment we took the hit and cycled all of the EC2 instances in our EKS managed node group. Doing that for production is a much bigger task, and we would prefer not to.

@pdechandol

Hi everybody. I've just run into the same problem. For me it occurred after upgrading my providers and migrating my Terraform code to replace the EKS module with native resources.

I solved it by removing the aws_launch_template from the state (terraform state rm ...) and then running terraform apply -target=...
After that, everything was OK.

@ingshtrom

ingshtrom commented Feb 10, 2022

@pdechandol, doesn't that create a new resource and leave the original one orphaned?

@pdechandol

@ingshtrom You're right: working like this, I created a new aws_launch_template resource. That was acceptable in my case because I also wanted to create a new aws_autoscaling_group based on the new launch template.

If you want to keep your existing resource, maybe you could investigate terraform import, but I'm not sure about it.

@ingshtrom

Interesting. I hadn't thought about doing a remove and then an import 🤔. I may have to try that.

@ewbankkit
Contributor

This was addressed via #22277.

@github-actions

github-actions bot commented May 9, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 9, 2022