
on-destroy provisioners not being executed #13549

Closed

IOAyman opened this issue Apr 11, 2017 · 70 comments · Fixed by #35230
Labels
core · enhancement · v0.12 (Issues (primarily bugs) reported against v0.12 releases)

Comments

@IOAyman

IOAyman commented Apr 11, 2017

Terraform Version

v0.9.2

Affected Resource(s)

  • digitalocean_droplet
  • provisioners (on-destroy)

Terraform Configuration Files

...
resource "digitalocean_droplet" "kr_manager" {
  name     = "${var.do_name}"
  image    = "${var.do_image}"
  region   = "${var.do_region}"
  size     = "${var.do_size}"
  ssh_keys = [XXX]

  provisioner "local-exec" {
    command = "echo ${digitalocean_droplet.kr_manager.ipv4_address} >> hosts"
  }
  provisioner "remote-exec" {
    inline = ["dnf install -y python python-dnf"]
    connection {
      type        = "ssh"
      user        = "${var.ssh_user}"
      private_key = "${file(var.ssh_key)}"
      timeout     = "1m"
    }
  }
  provisioner "local-exec" {
    command = "ansible-playbook ${var.play}"
  }
  provisioner "local-exec" {
    command = "docker-machine create --driver generic --generic-ip-address ${digitalocean_droplet.kr_manager.ipv4_address} --generic-ssh-key ${var.ssh_key} ${var.do_name}"
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "rm hosts"
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "docker-machine rm -f ${var.do_name}"
  }
}
...

Debug Output

https://gist.github.com/IOAyman/3e86d9c06d03640786184c1429376328

Expected Behavior

It should have run the on-destroy provisioners

Actual Behavior

It did not run the on-destroy provisioners

Steps to Reproduce

  1. terraform apply -var-file=infrasecrets.tfvars
  2. terraform destroy -var-file=infrasecrets.tfvars

@apparentlymart
Contributor

Hi @IOAyman! Thanks for opening this.

This is indeed related to both #13097 and #13395, and seems to be another example of the same general problem. However, each of these would be solved in a different part of the codebase, so I'm going to leave all three open with cross-references between them, though ultimately we will probably fix them all in a single PR.

@Gary-Armstrong

I'm having this issue in a local-exec provisioner when I am tainting aws_instance resources, but I can't tell from this issue text if I'm hitting this particular bug. Is the same mechanism used to -/+ after taint as when an explicit terraform destroy is used?

@apparentlymart
Contributor

@Gary-Armstrong I'm almost certain that you've found another variant of the same root cause there. Thanks for mentioning it!

Having slept on it a bit, I've changed my mind and am going to fold all of these issues into this one as an umbrella issue for the various cases where destroy provisioners aren't working yet, since I strongly suspect that the fix will be a holistic one that covers all of these cases together.

Note for me or someone else who picks this up in future: the root challenge here is that destroy provisioners currently live only in config, yet most of our destroy use-cases don't have access to config for one reason or another. I think the holistic fix is to change how destroy provisioners work so that during apply we include the fully-resolved destroy provisioner configs as part of the instance state, and then when we destroy those instances we use the stashed config from the state, thus avoiding the direct dependency on config during destroy. This would then address all of the variants of the problem we've seen so far, and probably ones we've not yet seen (a minimal sketch of the first case follows the list):

  • Resource has been removed from config altogether ("orphaned", in core terminology)
  • Instance has been deposed as part of the create_before_destroy lifecycle
  • Instance has been tainted (this one is tricky, since we don't know if the create provisioner ran in this case; we may need to punt on making this work since we don't know if a particular destroy provisioner depends on the outcome of a particular create provisioner)
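
A minimal configuration that reproduces the first ("orphaned") case, for illustration only; the resource name is made up. Apply it once, then delete the whole resource block and apply again: the destroy-time provisioner is skipped because its configuration is no longer present.

resource "null_resource" "example" {
  # Runs at creation time.
  provisioner "local-exec" {
    command = "echo created >> log.txt"
  }

  # Expected to run when this resource is destroyed, but if the whole
  # resource block has been removed from config before the next apply,
  # the provisioner configuration is gone and nothing is executed.
  provisioner "local-exec" {
    when    = "destroy"
    command = "echo destroyed >> log.txt"
  }
}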

@Gary-Armstrong

More info: I eliminated some instances by setting count = 0 and the local-exec ran properly.

@in4mer

in4mer commented Apr 14, 2017

I'd like to add some color here:

when = "destroy" provisioners are also not being run for aws_instances that are marked as tainted, which are then destroyed and re-created. This is on v0.9.3.

I see that this was the third bullet in the comment above; sorry for adding more traffic. A word to the maintainers, though: I consider it my responsibility to design idempotency around the create/destroy provisioners. I expect TF to be a trigger based on when specific events start, not at some point during their execution. I can design my provisioners to error when I want them to, and to ignore errors that I don't consider constructive, thereby avoiding the whole discussion of "when should we run these?"

Maybe the destroyers should have an option or two allowing us to customize when/where they fire? Food for thought.

My current destruction provisioner (This might be helpful to see my position):

  provisioner "local-exec" {
    when    = "destroy"
    command = "knife node delete ${self.tags.Name} -y; knife client delete ${self.tags.Name} -y; true"
   /* The reason we're ignoring the return values is either it works, or it
      wasn't there to begin with. Or you don't have permissions, but it's going
      to wind up desynchronized anyway. */
  }

An alternative would be a way to gate the overall instance termination on the success or failure of other designated destructors prior to outright instance destruction. I wonder if that could be accomplished by putting the local-exec destructor before the Chef provisioner, but I haven't tested whether that works (a rough sketch of that ordering follows). Then I could avoid the desync by designing my destructor to stop terraform destroy in a better way.
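
For what it's worth, provisioners run in the order they are declared, and a destroy provisioner that fails (the default on_failure = "fail") aborts the destroy of its resource. A rough, untested sketch of that gating idea, reusing the knife commands above with placeholder follow-up steps:

  # Runs first on destroy; on_failure defaults to "fail", so a failed
  # deregistration blocks the instance destroy instead of silently
  # desynchronizing the Chef server.
  provisioner "local-exec" {
    when    = "destroy"
    command = "knife node delete ${self.tags.Name} -y && knife client delete ${self.tags.Name} -y"
  }

  # Any further teardown can follow as a second destroy provisioner.
  provisioner "local-exec" {
    when       = "destroy"
    command    = "echo 'further cleanup here'"  # placeholder command
    on_failure = "continue"
  }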

@leftathome

In the case of a tainted resource, tracking the individual run results and configurations of provisioners against the resource as part of its state should help Terraform decide whether a provisioner needs to run its destroy action on the resource, with additional guidance from the resource's configuration.

For provisioners that kick off config management tools, a successful run usually indicates there's something there that needs to be torn down at destroy time. There will probably be a common set of actions that each CM tool uses for decommissioning, as in @in4mer's example where API calls need to get made to remove the instance from a Chef server.

(I actually found this thread because I thought that's how Terraform's Chef provisioner already worked!)

Remote or local exec are more open-ended, so they might need to have destroy-stage actions explicitly defined in the resource config, defaulting to noop.

@pierrebeaucamp

pierrebeaucamp commented Sep 22, 2017

Edit: Sorry, it appears to run correctly now. Please ignore this whole comment

I have the same issue with a null_resource and a local-exec provisioner in it. Creation works fine, but terraform destroy completely skips over the when = "destroy" command.

@in4mer

in4mer commented Sep 29, 2017

I'm bumping this because it's still an issue.

Honestly, I don't care that we're waiting for eleventeen planets to align so we can have a celestially perfect tea ceremony to please our ancestors in the afterlife, and that we'll have to wait six more years for it to happen. This issue needs a fix: gate the destroy provisioner blocks on the resource they're attached to. If the resource was there, run the provisioners and let the admins sort it out.

@bizmate

bizmate commented Oct 11, 2017

I have seen a problem related to this, but I think I know what is happening. I am destroying an instance which has a provisioner set with when = "destroy". It is not running because the instance networking is taken down before the remote-exec can run, so by the time the provisioner runs it cannot SSH to the machine.

terraform destroy -target=aws_instance.biz_gocd -force
aws_vpc.vpc_biz_dev: Refreshing state... (ID: vpc-4ab68733)
aws_key_pair.auth: Refreshing state... (ID: biz-GoCD-Key)
aws_security_group.biz_dev: Refreshing state... (ID: sg-c5062db6)
aws_subnet.subnet_biz_dev: Refreshing state... (ID: subnet-e9c91e8d)
aws_instance.biz_gocd: Refreshing state... (ID: i-029333e7696ca72c9)
aws_eip_association.eip_assoc: Destroying... (ID: eipassoc-a6a11391)
aws_eip_association.eip_assoc: Destruction complete after 1s
aws_instance.biz_gocd: Destroying... (ID: i-029333e7696ca72c9)
aws_instance.biz_gocd: Provisioning with 'remote-exec'...
aws_instance.biz_gocd (remote-exec): Connecting to remote host via SSH...
aws_instance.biz_gocd (remote-exec):   Host:
aws_instance.biz_gocd (remote-exec):   User: ubuntu
aws_instance.biz_gocd (remote-exec):   Password: false
aws_instance.biz_gocd (remote-exec):   Private key: false
aws_instance.biz_gocd (remote-exec):   SSH Agent: true
aws_instance.biz_gocd (remote-exec): Connecting to remote host via SSH...
....
timeout

I don't think this should be expected behaviour. Instead, the destroy provisioner should be queued before any other changes (a possible workaround is sketched below).
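
One possible workaround for this ordering problem (untested, and assuming the resource names from the log above): put the destroy-time remote-exec on a null_resource that depends on the aws_eip_association, so it is destroyed, and its provisioner run, while the Elastic IP is still attached. In 0.11-era syntax, with placeholder commands:

resource "null_resource" "gocd_teardown" {
  # Because this resource depends on the EIP association, it is destroyed
  # first, while the Elastic IP is still attached to the instance.
  depends_on = ["aws_eip_association.eip_assoc"]

  triggers {
    instance_id = "${aws_instance.biz_gocd.id}"
  }

  provisioner "remote-exec" {
    when   = "destroy"
    inline = ["echo 'run teardown steps here'"]  # placeholder commands

    connection {
      type = "ssh"
      user = "ubuntu"
      host = "${aws_eip_association.eip_assoc.public_ip}"
    }
  }
}

On Terraform 0.12 and later, destroy provisioners may only reference self, so the address would instead have to be stored in the triggers and read back via self.triggers.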

@hantuzun

hantuzun commented Feb 8, 2018

This issue bugs us with our Terraform-managed DC/OS cluster. Since we can't run destroy provisioners (to stop the dcos-mesos-slave service), the jobs on destroyed nodes are not moved to other nodes in a timely manner...

@bizmate's comment is interesting. There could be an easy fix there for some (maybe the majority of) use cases.

@philwinder

philwinder commented Feb 12, 2018

I see the same issue as bizmate. In my case I'm trying to run a destroy provisioner on EBS volume attachments, but it seems we lose the network routes before the provisioner has run. My case is slightly different, as I'm going through a bastion and the code below is in a module (this could be the edge case).

resource "aws_volume_attachment" "volume_attachments" {
...
  # Fix for https://github.com/terraform-providers/terraform-provider-aws/issues/2084
  provisioner "remote-exec" {
    inline     = ["sudo poweroff"]
    when       = "destroy"
    on_failure = "continue"

    connection {
      user         = "centos"
      host         = "${element(aws_instance.cluster_nodes.*.private_ip, count.index % var.num_nodes)}"
      private_key  = "${file(var.private_key_path)}"
      bastion_host = "${var.bastion_public_ip}"
      agent        = false
    }
  }

  provisioner "local-exec" {
    command = "sleep 30"
    when    = "destroy"
  }
}

@mavogel

mavogel commented Mar 8, 2018

I found a workaround with a null_resource, which can be used for more fine-grained provisioning.
The following works for me: the destroy provisioner of an aws_instance successfully executes the teardown.sh script.

resource "aws_instance" "my_instance" {
   ...  
}

resource "null_resource" "my_instance_provisioning" {
  triggers {
    uuid = "${aws_instance.my_instance.id}"
  }

  provisioner "remote-exec" {
    inline = [
      "bash setup.sh",
    ]
  }

  provisioner "remote-exec" {
    when = "destroy"

    inline = [
      "bash teardown.sh",
    ]
  }
}

The key is that the null_resource will be destroyed before the aws_instance is destroyed, so it can still establish a connection. Hope that helps you folks save some time and build better, cleaner infrastructure :)
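
Note that on Terraform 0.12 and later, destroy-time provisioners can only reference self, so the connection details have to be stashed in the triggers at create time and read back from self.triggers. A hedged sketch of that variant (attribute and variable names are illustrative, not from the original post):

resource "null_resource" "my_instance_provisioning" {
  triggers = {
    host        = aws_instance.my_instance.public_ip
    user        = "centos"                    # adjust to your AMI's user
    private_key = file(var.private_key_path)  # illustrative variable name
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["bash teardown.sh"]

    connection {
      type        = "ssh"
      host        = self.triggers.host
      user        = self.triggers.user
      private_key = self.triggers.private_key
    }
  }
}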

@matikumra69

I am getting error:
aws_instance.ec2_instance: aws_instance.ec2_instance: self reference not allowed: "aws_instance.ec2_instance.id"

@matikumra69

matikumra69 commented Mar 8, 2018

Here is my code ....

resource "aws_instance" "ec2_instance" {
  ami           = "${var.AMI_ID}"
  instance_type = "${var.ec2_type}"
  key_name = "${var.ec2_keyname}"
  vpc_security_group_ids = ["${var.ec2_security_group_id}"]
  subnet_id = "${var.ec2_subnet_id}"
  iam_instance_profile = "${var.ec2_role_name}"
 ........
resource "null_resource" "my_instance_provisioning" {
  triggers {
    uuid = "${aws_instance.ec2_instance.id}"
  }
#provisioner "local-exec" {
#    command = "sleep 30"
#    when    = "destroy"
#}

provisioner "file" {
        source = "scripts/teardown.sh"
        destination = "/tmp/teardown.sh"
        connection {
                        type     = "ssh"
                        user     = "${var.ec2_runuser}"
                }
    }


provisioner "remote-exec" {
    inline = [
      "sudo chmod +x /tmp/teardown.sh",
      "sudo /tmp/teardown.sh",
    ]
    when = "destroy"
    connection {
    type     = "ssh"
    user     = "${var.ec2_runuser}"
  }
}
}

@mavogel

mavogel commented Mar 8, 2018

hi @matikumra69: could you put your code in a code block? Then it's more readable and I can help you with it :) Use the insert_code button.

Update: @matikumra69 can you provide the whole code for the aws_instance? Your error probably has nothing to do with my proposal; more likely, within the aws_instance you refer to the resource itself, for example via ${aws_instance.ec2_instance.private_ip}. You should use ${self.private_ip} in that case (a minimal sketch of that pattern follows).
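
A minimal sketch of that self-reference pattern, purely illustrative (the surrounding instance arguments are elided):

resource "aws_instance" "ec2_instance" {
  # ... ami, instance_type, and so on ...

  provisioner "remote-exec" {
    inline = ["echo bootstrap"]

    connection {
      type = "ssh"
      user = "${var.ec2_runuser}"
      # Inside the resource's own blocks, refer to the instance via self
      # rather than aws_instance.ec2_instance.*
      host = "${self.private_ip}"
    }
  }
}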

@matikumra69

Hi @mavogel, I am getting this error.
module root:
module instance_module.root: 1 error(s) occurred:

  • resource 'aws_instance.ec2_instance' config: cannot contain self-reference self.private_ip

This is what I am doing .....

resource "null_resource" "my_instance_provisioning" {
triggers {
uuid = "${self.private_ip}"
}
provisioner "remote-exec" {
inline = [
"sudo touch /tmp/abc",
]
connection {
type = "ssh"
user = "${var.ec2_runuser}"
}
}
}

@mavogel

mavogel commented Mar 16, 2018

hi @matikumra69, first: please use code blocks (a guide is here); then your code is more readable.

Btw: a null_resource has no private_ip attribute, so you should pass it in as follows:

resource "null_resource" "my_instance_provisioning" {
  triggers {
    uuid = "${aws_instance.ec2_instance.private_ip}"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo touch /tmp/abc",
    ]
  connection {
    type = "ssh"
    user = "${var.ec2_runuser}"
  }
}

It would also help if you could provide your whole code with all definitions as a gist and link it here. With such small code snippets it's hard to diagnose your issue.

@fatmcgav
Contributor

fatmcgav commented Apr 6, 2018

@apparentlymart Just touching on your comment regarding 'tainted' instance flow, would supporting that just be a case of removing this EvalIf? https://github.com/hashicorp/terraform/blob/master/terraform/node_resource_destroy.go#L215

Not running the destroy provisioners when tainting a resource is causing us issues...

@asgoel

asgoel commented May 24, 2018

Any update on this one? This is causing us issues as we'd love to be able to use a destroy provisioner to do some cleanup on aws instances before they are destroyed. It's currently not working for us when using create_before_destroy (not sure if this is the same issue or not).

@Lasering

Lasering commented May 24, 2018

On Terraform v0.11.7, with create_before_destroy = true and tainting the resource, the destroy-time provisioners (local and remote exec) are not being run on that resource.

However, if I run a destroy of the entire infrastructure, the destroy provisioners are run.

@tad-lispy

tad-lispy commented Jun 20, 2018

It seems to me that just preventing it from running on tainted resources is too heavy-handed. I understand your reasons @apparentlymart, but perhaps it would be better to leave handling this to user code. Once I have the provisioner running I can do all kinds of checks and conditional execution. I'd much rather have to hack something like that than not be able to hook into the destruction cycle at all.

So maybe you can at least fix that. If you are worried about backward compatibility, then maybe there should be an option, like when = "tainted" or when = [ "tainted", "destroy" ] or something like that.

Also, the current behavior should be clearly documented. It isn't obvious, and there is nothing about it here: https://www.terraform.io/docs/provisioners/index.html#destroy-time-provisioners

@kidmis

kidmis commented Feb 2, 2021

Guys, are there any updates? Can we expect some additional values for the when argument?
Something like:

  • when = on_destroy (ok, let's keep it as now: run only when the terraform destroy command is executed)
  • when = on_taint (when the resource will be destroyed and re-created)
  • when = on_resource_destroy (execute it whenever the resource is going to be deleted, e.g. by terraform apply)

In addition, these could be combined in lists for best effort, e.g.:
when = [ tainted, on_destroy ]

@shebang89

@kidmis I wouldn't expect this to happen anytime soon, as teamterraform said in 2019:

Hi everyone,

This issue is labelled as an enhancement because the initial design of the destroy-time provisioners feature intentionally limited the scope to run only when the configuration is still present, to allow this feature to be shipped without waiting for a more complicated design that would otherwise have prevented the feature from shipping at all.

We're often forced to make compromises between shipping a partial feature that solves a subset of problems vs. deferring the whole feature until a perfect solution is reached, and in this case we decided that having this partial functionality was better than having no destroy-time provisioner functionality at all.

The limitations are mentioned explicitly in the documentation for destroy-time provisioners, and because provisioners are a last resort we are not prioritizing any development for bugs or features relating to provisioners at this time. We are keeping this issue open to acknowledge and track the use-case, but do not expect to work on it for the foreseeable future.

Please note also that our community guidelines call for kindness, respect, and patience. We understand that it is frustrating for an issue to not see any updates for a long time, and we hope this comment helps to clarify the situation and allow you all to look for alternative approaches to address the use-cases you were hoping to meet with destroy-time provisioners.

I suggest you use terraform destroy -target=RESOURCE as it triggers the destroy-time provisioners. If resource dependencies are chained OK, triggering this on a top-level resource in the dependency tree can do the job on the entire tree.
This can be used for automated deployments like this:
1. terraform destroy -target=RESOURCE
2. terraform plan
3. terraform apply -auto-approve

@ganniterix

ganniterix commented May 5, 2021

I want to add another use case: creating instances with the VMware vSphere provider inside a module. The script does not get executed when the parent module is removed from the workspace.

resource "null_resource" "decomission" {
  lifecycle {
    create_before_destroy = true
  }

  triggers = {
    id          = vsphere_virtual_machine.instance.id
    user        = var.vm_config.configuration_profile.provisioning_host_user
    private_key = var.vm_config.configuration_profile.provisioning_host_key
    host        = split("/", local.vm_config.network_layout.nics[0].ipaddresses[0])[0]

    bastion_host        = var.vm_config.configuration_profile.provisioning_bastion_use ? var.vm_config.configuration_profile.provisioning_bastion_host : null
    bastion_user        = var.vm_config.configuration_profile.provisioning_bastion_use ? var.vm_config.configuration_profile.provisioning_bastion_user : null
    bastion_private_key = var.vm_config.configuration_profile.provisioning_bastion_use ? var.vm_config.configuration_profile.provisioning_bastion_key : null
    bastion_use         = var.vm_config.configuration_profile.provisioning_bastion_use
  }

  provisioner "file" {
    when = destroy

    destination = "/tmp/decommision.sh"
    content     = templatefile("${path.module}/scripts/decommision.tpl", {})

    connection {
      type        = "ssh"
      user        = self.triggers.user
      private_key = self.triggers.private_key
      host        = self.triggers.host

      bastion_host        = self.triggers.bastion_use ? self.triggers.bastion_host : null
      bastion_user        = self.triggers.bastion_use ? self.triggers.bastion_user : null
      bastion_private_key = self.triggers.bastion_use ? self.triggers.bastion_private_key : null
    }
  }

  provisioner "remote-exec" {
    when = destroy

    inline = [
      "bash -x /tmp/decommision.sh"
    ]

    connection {
      type        = "ssh"
      user        = self.triggers.user
      private_key = self.triggers.private_key
      host        = self.triggers.host

      bastion_host        = self.triggers.bastion_use ? self.triggers.bastion_host : null
      bastion_user        = self.triggers.bastion_use ? self.triggers.bastion_user : null
      bastion_private_key = self.triggers.bastion_use ? self.triggers.bastion_private_key : null
    }
  }
}

Running this with anything other than terraform apply is not really an option, since this is meant to run in a VCS-managed Terraform Enterprise workspace.

@kayman-mk

Any news here? terraform destroy is not an option, as the problem occurs in a third-party module. Since I do not know its internals, it makes no sense to run the destroy command. It would really be better if this worked with terraform apply.

@stonefield

It is difficult to understand the Terraform team's reasoning here. This ticket has been open for more than four years, and I believe anyone with this need does not understand why when = destroy is tied to the terraform destroy command. Terraform's output when a module has been removed clearly states that the resource will be destroyed; hence this is a bug, not an enhancement.
I firmly believe that anything related to destroy-time triggers should be stored in the state file.

@ganniterix

It would be great if something could be done about this.

@arbourd

arbourd commented Jun 17, 2022

I'm confused.

when = destroy (in my case, #31266) works fine with terraform apply as long as the resource is not using the create_before_destroy lifecycle hook.

If that is the case, shouldn't this issue be considered out of date, closed, and replaced with something more accurate? @jbardin?

@jbardin
Member

jbardin commented Jun 17, 2022

@arbourd, Sorry I'm not sure what you mean. This issue is tracking the cases where destroy provisioners can't currently be executed, with deposed resources from create_before_destroy being one of those cases.

@arbourd

arbourd commented Jun 18, 2022

Right, but this issue is very old and has a bunch of complaints about it not working on "apply" at all.

Are the two major issues right now:

  • When the object is deposed
  • When the object is a member of a module

Is that correct?

@jbardin
Member

jbardin commented Jun 20, 2022

@arbourd, unfortunately any longstanding issues tend to accumulate large amounts of unrelated or unnecessary comments. Destroy provisioners were implemented with some fundamental shortcomings which are difficult to incorporate into the normal resource lifecycle. The summary above is still representative of the current situation.

@arbourd

arbourd commented Jun 28, 2022

For those of us using ssh provisioners with remote-exec, there is a resource that works pretty much the same: https://github.com/loafoe/terraform-provider-ssh

when = "destroy" support was just added

@TomKulakov

TomKulakov commented Feb 16, 2023

Can you at least inherit from or clone null_resource into a new kind of resource (custom_resource is taken, so maybe undetermined_resource) and implement the proper behaviour: an update/change handler for when the resource is being replaced, and actually running the destroy provisioner when the resource is destroyed, not only when terraform destroy is executed? I'm not a specialist here, just throwing out an idea based on my experience with Terraform, on what I've read here and in other similar topics, and on my own judgement. ;)

Now some more confirmation:
I can confirm the same issue here in v1.3.7.

Apply does not trigger the destroy provisioner, but the output says the resource was destroyed.

module.export_data.null_resource.export1: Destroying... [id=12345678901234567890]
module.export_data.null_resource.export1: Destruction complete after 0s

And when I did use terraform destroy, the behaviour was correct:

module.export_data.null_resource.export1: Destroying... [id=12345678901234567890]
module.export_data.null_resource.export1: Provisioning with 'local-exec'...
module.export_data.null_resource.export1: (local-exec): Executing: ["/bin/sh" "-c" "echo \"DESTROYING  XO XO XO XO XO\"\n"]
module.export_data.null_resource.export1: (local-exec): DESTROYING  XO XO XO XO XO
module.export_data.null_resource.export1: Destruction complete after 0s

Also, an apply that causes a replace does trigger the destroy provisioner as well. It would be better to have an update/change handler instead.

Provisioner definition:

    provisioner "local-exec" {
        command = <<EOT
            echo "DESTROYING XO XO XO XO XO"
            EOT
        on_failure = fail
        when = destroy
    }

Right now, to bypass this problem, I'm creating a purging null_resource which has to be removed from the code after some time. That is very awkward and unexpected for anyone using my modules (I now have two of those) for the first time. Alternatively, the resource could be removed manually, if someone has the permissions of course, but then it's not following IaC.

@artuross

I am surprised this issue is still open after all this time. I have the simplest case: I want to recreate my k8s control planes and for obvious reasons, I first need new servers to be created before I can destroy old servers.

Here's an example to illustrate:

locals {
  hash = "change this to whatever you want and reapply"
}

resource "random_string" "node" {
  length  = 3
  lower   = true
  special = false
  numeric = false
  upper   = false

  keepers = {
    user_data = sha256(local.hash)
  }
}

resource "null_resource" "create" {
  triggers = {
    hash = local.hash
    node = random_string.node.id
  }

  provisioner "local-exec" {
    when    = create
    command = "echo create ${self.triggers.hash} ${self.triggers.node}"
  }

  lifecycle {
    create_before_destroy = true
  }

  depends_on = [
    random_string.node,
  ]
}

resource "null_resource" "destroy" {
  triggers = {
    hash = local.hash
    node = random_string.node.id
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo destroy ${self.triggers.hash} ${self.triggers.node}"
  }

  lifecycle {
    # comment line below to see the difference
    create_before_destroy = true
  }

  depends_on = [
    random_string.node,
    null_resource.create,
  ]
}

And the output:

random_string.node: Creating...
random_string.node: Creation complete after 0s [id=kjc]
null_resource.create: Creating...
null_resource.create: Provisioning with 'local-exec'...
null_resource.create (local-exec): Executing: ["/bin/sh" "-c" "echo create change this to whatever you want and reapply kjc"]
null_resource.create (local-exec): create change this to whatever you want and reapply kjc
null_resource.create: Creation complete after 0s [id=3429720091763885346]
null_resource.destroy: Creating...
null_resource.destroy: Creation complete after 0s [id=948398361774632729]
null_resource.destroy (deposed object d889f24e): Destroying... [id=5541652494173857564]
null_resource.destroy: Destruction complete after 0s
null_resource.create (deposed object ad4d07d0): Destroying... [id=169389285284865921]
null_resource.create: Destruction complete after 0s
random_string.node (deposed object 892cf9f8): Destroying... [id=jtq]
random_string.node: Destruction complete after 0s

The server (null_resource) is first created, and only after that is the previous server destroyed. Too bad the command does not run.

On the other hand, if I comment out create_before_destroy = true in null_resource.destroy, my server is destroyed:

null_resource.destroy: Destroying... [id=948398361774632729]
null_resource.destroy: Provisioning with 'local-exec'...
null_resource.destroy (local-exec): Executing: ["/bin/sh" "-c" "echo destroy change this to whatever you want and reapply kjc"]
null_resource.destroy (local-exec): destroy change this to whatever you want and reapply kjc
null_resource.destroy: Destruction complete after 0s
random_string.node: Creating...
random_string.node: Creation complete after 0s [id=xtp]
null_resource.create: Creating...
null_resource.create: Provisioning with 'local-exec'...
null_resource.create (local-exec): Executing: ["/bin/sh" "-c" "echo create change this to whatever you want and reapply2 xtp"]
null_resource.create (local-exec): create change this to whatever you want and reapply2 xtp
null_resource.create: Creation complete after 0s [id=2523110283929799885]
null_resource.destroy: Creating...
null_resource.destroy: Creation complete after 0s [id=5905504673339125500]
null_resource.create (deposed object fabd7d77): Destroying... [id=3429720091763885346]
null_resource.create: Destruction complete after 0s
random_string.node (deposed object e916dd0f): Destroying... [id=kjc]
random_string.node: Destruction complete after 0s

Too bad I've lost all my data.

How is that not a valid use case?

@shizzit

shizzit commented Aug 7, 2023

It seems absolutely bonkers that this has been an issue for so long. Something like what @kidmis suggested would be perfect here, IMO. I'm experiencing this on the latest release (v1.5.4 as of writing).

@akamac

akamac commented Oct 6, 2023

We need this to safely update instance types for EKS managed node groups, which cannot be done in-place and forces re-creation.

@VickyPenkova

Is there any update here?
Using the latest version of Terraform, I am hitting the same issue: the resource I'm changing gets created with the new config, but the old one is never destroyed.
This happens for Route 53 records, and it's essential functionality for us, as we end up with duplicate records.

@giner

giner commented Jan 9, 2024

What if, instead of supporting only destroy for provisioners, there were a separate resource supporting the whole resource lifecycle, similar to how the external provider works for data sources? I guess this would cover most if not all cases.

@jbardin
Member

jbardin commented Jan 10, 2024

@giner, yes a separate managed resource is often the preferred solution here. Either a custom one which suits your use case, or something more generic like “external” which runs different commands at various points in the resource instance lifecycle. I think there are some existing examples in the public registry.


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 23, 2024