
While running terraform destroy it deletes the EBS volume along with the instance; we do not want that for production. The EBS volume should just be detached, not destroyed/deleted #4293

Open
ghost opened this issue Apr 21, 2018 · 9 comments
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service.

Comments

@ghost

ghost commented Apr 21, 2018

This issue was originally opened by @ruchikhanuja as hashicorp/terraform#17889. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.7

...

Terraform Configuration Files

resource "aws_ebs_volume" "storage" {
  availability_zone = "${data.aws_subnet.this.availability_zone}"
  type              = "${var.ebs_storage_type}"
  size              = "${var.ebs_storage_size}"
}

resource "aws_volume_attachment" "ebs_assoc" {
  depends_on   = ["aws_ebs_volume.storage"]
  device_name  = "/xyz"
  volume_id    = "${aws_ebs_volume.storage.*.id[count.index]}"
  instance_id  = "${module.brokers.instance_ids[count.index]}"
  skip_destroy = true
}

Debug Output

Crash Output

Expected Behavior

When running terraform destroy, only the instance should be destroyed, not the EBS volume.

Actual Behavior

With terraform destroy, the EBS volume is also deleted, even though skip_destroy = true is set.

Steps to Reproduce

Additional Context

References

@devonbleak
Contributor

I would say this is working as intended - you're declaring the EBS volume resource and its lifecycle is being managed by Terraform.

Since what you've requested would lead to Terraform leaving unmanaged resources behind, what you may want to do is create the volume manually and use a data source to import its settings and work with it from there.
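A minimal sketch of that data-source approach, assuming the volume was created outside Terraform and carries a Name tag (all names and tag values here are placeholders, in Terraform 0.11 syntax to match the issue):

```hcl
# Look up an externally-managed volume by tag. Terraform never destroys it,
# because it is a data source rather than a managed resource.
data "aws_ebs_volume" "storage" {
  filter {
    name   = "tag:Name"
    values = ["jenkins-data"] # hypothetical tag value
  }
}

resource "aws_volume_attachment" "ebs_assoc" {
  device_name  = "/dev/sdh" # placeholder device name
  volume_id    = "${data.aws_ebs_volume.storage.id}"
  instance_id  = "${module.brokers.instance_ids[count.index]}"
  skip_destroy = true
}
```

On destroy, only the attachment leaves the state (and with skip_destroy = true it is not even detached first); the volume itself is untouched.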

Looking at the original PR that introduced skip_destroy on the volume attachment resource, the intended use case is not to preserve the volume itself, but to skip destroying the attachment to the instance, so that the filesystem on an externally-managed volume can be left in a consistent state when the instance is destroyed. If you declare the EBS volume as a resource in the same lifecycle, it's going to get destroyed regardless of the setting on aws_volume_attachment.

@radeksimko radeksimko added bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service. labels Apr 24, 2018
@pfalcone

pfalcone commented Jul 21, 2018

The workaround is a bit cumbersome in my opinion.

In my case I've built a Jenkins server and a separate EBS volume for its data drive using Terraform. If we need to destroy the Jenkins server but want to move the data drive to a new machine and attach it there, we simply can't do so via Terraform: the data drive is slated for destruction along with the Jenkins server, even with skip_destroy = true set on the aws_volume_attachment and prevent_destroy = true on the aws_ebs_volume:

 terraform plan -destroy -var-file=./dev.tfvars  -target=aws_instance.jenkins-ci
Acquiring state lock. This may take a few moments...

<--snipped-->
Releasing state lock. This may take a few moments...

Error: Error running plan: 1 error(s) occurred:

* aws_ebs_volume.jenkins_data_vol: aws_ebs_volume.jenkins_data_vol: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag
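For context, the prevent_destroy setting referenced in the error above is a lifecycle block on the volume resource; a sketch (availability zone and size are placeholders):

```hcl
resource "aws_ebs_volume" "jenkins_data_vol" {
  availability_zone = "us-east-1a" # placeholder
  size              = 100          # placeholder

  lifecycle {
    # Causes any plan that would destroy this resource to fail with the
    # error shown above -- it blocks the plan rather than skipping the volume.
    prevent_destroy = true
  }
}
```

In other words, prevent_destroy protects the volume by aborting the whole destroy, not by detaching and retaining it.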

@queglay

queglay commented Jan 14, 2019

A solution for this would be good to see. For example, in CloudFormation it's possible to set DeletionPolicy: Retain on a resource to keep it after the stack is deleted.

It makes a lot of sense for EBS volumes, along with a way to record the volume ID and mount point somewhere so the volume can easily be reattached on the next apply.

@guillermo-menjivar
Contributor

guillermo-menjivar commented Apr 23, 2019

Could you put the EBS volume and the instance in their own Terraform states and make the EBS volume available to the instance via remote state? I'll give this a test to see if it works.
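A sketch of that split-state idea, assuming the volume lives in its own state that exposes an output named volume_id (backend details and resource names are placeholders, in Terraform 0.11 syntax to match the issue):

```hcl
# In the instance configuration: read the volume's state from a separate backend.
data "terraform_remote_state" "storage" {
  backend = "s3"

  config {
    bucket = "my-tf-states"              # placeholder
    key    = "storage/terraform.tfstate" # placeholder
    region = "us-east-1"                 # placeholder
  }
}

resource "aws_volume_attachment" "ebs_assoc" {
  device_name  = "/dev/sdh"
  volume_id    = "${data.terraform_remote_state.storage.volume_id}"
  instance_id  = "${aws_instance.this.id}" # placeholder instance
  skip_destroy = true
}
```

Destroying the instance state then never touches the volume, because this configuration only references it and never manages it. (Terraform 0.12+ would use config = { ... } and ...outputs.volume_id instead.)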

@juanluisbaptiste

@videte47 Did your test work?

@queglay

queglay commented Nov 20, 2019

Although this was problematic in my own scenario, Ansible solved the problem in my workflow: https://medium.com/faun/attaching-a-persistent-ebs-volume-to-a-self-healing-instance-with-ansible-d0140431a22a

@spstarr

spstarr commented Oct 13, 2021

I concur. Data IS more important than the Terraform state; you can recreate the resource, but not the lost data. Just as AWS, when you terminate an EC2 instance, warns which volumes will not be destroyed, that same behavior should be respected in Terraform.

@queglay

queglay commented Oct 13, 2021

It all depends on the use case. If you are deploying configuration as code and the deployed instance state is immutable, then you probably do want to delete the EBS volume more often than not.

If the instance is being manually configured, or it's a remote workstation / Cloud9 instance, then preserving the EBS volume is more often desirable.

@noahehall

noahehall commented Dec 18, 2022

ebs_block_device is capable of data persistence
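Concretely, the delete_on_termination argument on an inline ebs_block_device controls whether the volume outlives the instance; a sketch with placeholder values:

```hcl
resource "aws_instance" "this" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  ebs_block_device {
    device_name = "/dev/sdh"
    volume_size = 100

    # Keep the volume in AWS when the instance is terminated
    # (the default here is true, i.e. delete the volume with the instance).
    delete_on_termination = false
  }
}
```

Note that the volume still drops out of Terraform's view once the instance resource is destroyed; it survives in AWS but becomes unmanaged, much like the data-source workaround above.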
