Launch configurations update #2183

Closed
scalp42 opened this issue Jun 2, 2015 · 7 comments

@scalp42
Contributor

scalp42 commented Jun 2, 2015

I'm running into an issue when updating launch configurations (for example updating the instance size).

Based on this config:

resource "aws_launch_configuration" "fortytwo_default" {
#   name = "fortytwo-default"
    image_id = "whatever"
    instance_type = "m1.small"
    iam_instance_profile = "${var.iam_profile}"
    key_name = "${var.key_name}"
    user_data = "${replace(replace(template_file.userdata_packer.rendered, "#ROLE", "default"), "#ENVIRONMENT", "fortytwo")}"
    security_groups = ["${aws_security_group.default.id}", "${aws_security_group.jumphost-clients.id}", "${aws_security_group.nat-clients.id}"]
}

resource "aws_autoscaling_group" "default" {
  depends_on = ["aws_launch_configuration.fortytwo_default"]
  availability_zones = ["${split(",", var.subnet_availability_zones_list)}"]
  name = "fortytwo-default"
  max_size = 5
  min_size = 0
  desired_capacity = 1
  health_check_grace_period = 300
  health_check_type = "ELB"
  force_delete = true
  launch_configuration = "${aws_launch_configuration.fortytwo_default.id}"
  vpc_zone_identifier = ["${aws_subnet.private.*.id}"]
}

I run apply and then try to change the instance size to m3.medium:

resource "aws_launch_configuration" "fortytwo_default" {
#   name = "fortytwo-default"
    image_id = "whatever"
    instance_type = "m3.medium"
    iam_instance_profile = "${var.iam_profile}"
    key_name = "${var.key_name}"
    user_data = "${replace(replace(template_file.userdata_packer.rendered, "#ROLE", "default"), "#ENVIRONMENT", "fortytwo")}"
    security_groups = ["${aws_security_group.default.id}", "${aws_security_group.jumphost-clients.id}", "${aws_security_group.nat-clients.id}"]
}

resource "aws_autoscaling_group" "default" {
  depends_on = ["aws_launch_configuration.fortytwo_default"]
  availability_zones = ["${split(",", var.subnet_availability_zones_list)}"]
  name = "fortytwo-default"
  max_size = 5
  min_size = 0
  desired_capacity = 1
  health_check_grace_period = 300
  health_check_type = "ELB"
  force_delete = true
  launch_configuration = "${aws_launch_configuration.fortytwo_default.id}"
  vpc_zone_identifier = ["${aws_subnet.private.*.id}"]
}

Result from running plan:

    ~ aws_autoscaling_group.default
        launch_configuration: "terraform-sz7sl7q4wfawzadckhlty52uxu" => "${aws_launch_configuration.fortytwo_default.id}"

    -/+ aws_launch_configuration.fortytwo_default
        associate_public_ip_address: "false" => "0"
        ebs_block_device.#:          "0" => "<computed>"
        ebs_optimized:               "false" => "<computed>"
        iam_instance_profile:        "RallyStack" => "RallyStack"
        image_id:                    "whatever" => "whatever"
        instance_type:               "m1.small" => "m3.medium" (forces new resource)
        key_name:                    "particles" => "particles"
        name:                        "terraform-sz7sl7q4wfawzadckhlty52uxu" => "<computed>"
        root_block_device.#:         "0" => "<computed>"
        security_groups.#:           "3" => "3"
        security_groups.191680440:   "sg-ae979bca" => "sg-ae979bca"
        security_groups.2047546322:  "sg-ac979bc8" => "sg-ac979bc8"
        security_groups.3366657482:  "sg-ad979bc9" => "sg-ad979bc9"
        user_data:                   "a2a555351f12fdf49e57c304a2f3072fd967ef76" => "a2a555351f12fdf49e57c304a2f3072fd967ef76"

Result when I try to run apply:

    aws_launch_configuration.fortytwo_default: Destroying...
    aws_launch_configuration.fortytwo_default: Error: 1 error(s) occurred:

    * ResourceInUse: Cannot delete launch configuration terraform-sz7sl7q4wfawzadckhlty52uxu because it is attached to AutoScalingGroup fortytwo-default

Thanks in advance for looking at it!

@scalp42
Contributor Author

scalp42 commented Jun 2, 2015

#1109 (comment) addresses the problem.
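For reference, a minimal sketch of that workaround as I understand it (most attributes trimmed, values taken from the example above): omit the explicit name so Terraform generates a unique one, and add create_before_destroy so the replacement LC exists before the old one is destroyed and the ASG is re-pointed in between:

resource "aws_launch_configuration" "fortytwo_default" {
  # No explicit name: Terraform picks a unique one, so the new LC can
  # coexist with the old one while the ASG is switched over to it.
  image_id = "whatever"
  instance_type = "m3.medium"

  lifecycle {
    create_before_destroy = true
  }
}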

Any chance of a builtin way to generate a UID, so that you can still specify a name (and have it work with the LC update issue) instead of terraform-ygonqdl6mvedrasbnubdewabia, which is not super user-friendly?

Something better than this:

variable "uid" {
  description = "UID to re-generate resources"
  # date '+%m-%d-%Y_%H_%m_%S'
  default = "06-01-2015_18_06"
}
resource "aws_launch_configuration" "fortytwo_default" {
    name = "fortytwo-default-${var.uid}"
}

Could Terraform generate the UID itself, but let us override the terraform part of the name, maybe with something like a uid_prefix argument on the resource?

I also tried using a local-exec provisioner to run that date command and store the result in a file to be read into the name attribute, but that doesn't work either.
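A rough sketch of what a readable name could look like, assuming a Terraform version where aws_launch_configuration supports the name_prefix argument (added in later releases); the prefix value here is just illustrative:

resource "aws_launch_configuration" "fortytwo_default" {
  # name_prefix gives a human-readable prefix while Terraform still
  # appends a unique suffix, so replacement LCs never collide on name.
  name_prefix = "fortytwo-default-"
  image_id = "whatever"
  instance_type = "m3.medium"

  lifecycle {
    create_before_destroy = true
  }
}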

@phinze
Contributor

phinze commented Jun 8, 2015

Hi @scalp42 - thanks for the detailed report here.

Sounds like the root issue here is figured out - I just opened #2269 to track the feature that I believe supports the use-case you're talking about at the end here.

I'm going to close this issue, but feel free to let me know if you believe there's some reason we should leave this open. 👍

@BrunoBonacci

Hi,
is there any chance to get this issue resolved?

Although the ticket is closed and related (somehow) to #2269, I think this should be dealt with separately, for the following reasons:

In practice, once created, you can't update an auto-scaling group and launch configuration via Terraform; therefore the severity/priority of this issue should be considered higher than that of the related ticket.

The only way I've managed to update a launch configuration once created by Terraform is by following these steps:

  • update Terraform script with new launch configuration (WITHOUT APPLYING THE CHANGES)
  • MANUALLY: go to the console, Clone/copy the launch configuration into a new-one
  • MANUALLY: then, update the ASG to use the cloned launch configuration
  • MANUALLY: delete the old launch configuration
  • finally, run terraform apply, which will create the new launch configuration, update the ASG to use the newly created launch configuration, and delete the old one.

As you might imagine this isn't fun, and the copy/clone described above could be done "easily" by Terraform under the hood.

Is there any chance to get a resolution any time soon?

@phinze
Contributor

phinze commented Sep 4, 2015

Hi @BrunoBonacci - sorry for the frustrating experience you've been having with LCs / ASGs in Terraform.

Here are the specifics of these resources that make this more complicated than other areas of the AWS APIs:

  • Launch Configurations are immutable (create and delete only - no update)
  • Launch Configuration names must be unique per account
  • AutoScaling group names must be unique per account

This places constraints on valid configurations, but there are definitely several ways to successfully manage a complete LC/ASG lifecycle via Terraform.

I recently explained what we do at HashiCorp on the Terraform mailing list; the post gives a full working example of one such strategy:

https://groups.google.com/d/msg/terraform-tool/7Gdhv1OAc80/iNQ93riiLwAJ
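A condensed sketch of that kind of strategy (the linked post has the full working example; attribute values here are trimmed and illustrative): let the LC name be generated, interpolate it into the ASG name, and set create_before_destroy on both, so replacing the LC also replaces the ASG and rolls the instances onto the new configuration:

resource "aws_launch_configuration" "fortytwo_default" {
  image_id = "whatever"
  instance_type = "m3.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "default" {
  # Tying the ASG name to the LC name forces a new ASG whenever the LC is
  # replaced, which cycles the instances onto the new launch configuration.
  name = "fortytwo-default-${aws_launch_configuration.fortytwo_default.name}"
  launch_configuration = "${aws_launch_configuration.fortytwo_default.name}"
  vpc_zone_identifier = ["${aws_subnet.private.*.id}"]
  max_size = 5
  min_size = 0
  desired_capacity = 1

  lifecycle {
    create_before_destroy = true
  }
}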

@BrunoBonacci

Hi Paul,

Many thanks for your suggestions and the sample in the link; I will try to apply the changes you suggested.

It would be good if you could add the ASG and LC sample you sent me to the Terraform documentation; this would save other people a lot of time.

Bruno

@adamhathcock

This helps a lot.

@ghost

ghost commented Apr 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 19, 2020