Cannot destroy an Aurora RDS cluster when it was built with a replication_source_identifier value #6749
Comments
In fact, even if I force a destroy order using phased targets, the secondary cluster still doesn't cleanly go away; it fails with the same error.
This is expected behavior, if I am not wrong. It is a deliberate safeguard put in place by AWS in the upstream API to prevent accidental deletion, and it would not seem appropriate for Terraform to override it. https://aws.amazon.com/premiumsupport/knowledge-center/rds-error-delete-aurora-cluster/
If I am asking Terraform to destroy, it should destroy. There are already guardrails in Terraform that list the things that will be destroyed and require approval.
Ah, this is at the database level. If you promote the read replica to a standalone cluster, then your destroy should go through.
But how do we promote the cluster to standalone? This is the definition of our cluster, and if I blank the
@SaravanRaman - as @pbeaumontQc mentioned, is it possible to promote the cluster to standalone using Terraform? For anyone else coming across this, promotion can be done using aws(1):
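A sketch of the command, assuming the secondary cluster's identifier is `aurora-secondary` and it lives in `us-west-2` (substitute your own identifier and region):

```sh
# Promote the read-replica cluster to a standalone cluster so that
# terraform destroy can remove it afterwards.
aws rds promote-read-replica-db-cluster \
  --db-cluster-identifier aurora-secondary \
  --region us-west-2
```

Once the promotion completes, the cluster no longer has a replication source and the destroy should proceed.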
Pinging this issue so it stays alive. Unfortunately, I landed on this while building an HA and disaster recovery setup.
Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
This is still an outstanding issue and should remain open. It's rather galling that this has sat for over two years without any updates from HashiCorp. The integrity of customer data is obviously of paramount importance, so any concerns Terraform users have about incorrect behaviour involving AWS RDS services are a major red flag and need addressing.
Warning: This issue has been closed, meaning that any additional comments are hard for our team to see. Please assume that the maintainers will not see them. Ongoing conversations amongst community members are welcome; however, the issue will be locked after 30 days. Moving conversations to another venue, such as the AWS Provider forum, is recommended. If you have additional concerns, please open a new issue, referencing this one where needed.
This functionality has been released in v5.80.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
Community Note
Terraform Version
Affected Resource(s)
* provider.aws v1.50.0
Terraform Configuration Files
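The original configuration was not captured here; a minimal sketch of the pattern being described, using hypothetical names, a hypothetical provider alias for the replica region, and placeholder credentials, might look like:

```hcl
# Primary Aurora cluster (hypothetical identifiers and values, for illustration only).
resource "aws_rds_cluster" "primary" {
  cluster_identifier  = "aurora-primary"
  engine              = "aurora-mysql"
  master_username     = "admin"
  master_password     = "change-me"
  skip_final_snapshot = true
}

# Secondary cluster built as a cross-region read replica of the primary.
# The replication_source_identifier is what later blocks `terraform destroy`.
resource "aws_rds_cluster" "secondary" {
  provider                      = aws.replica_region
  cluster_identifier            = "aurora-secondary"
  engine                        = "aurora-mysql"
  replication_source_identifier = aws_rds_cluster.primary.arn
  skip_final_snapshot           = true
}
```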
Expected Behavior
Running `terraform destroy` should destroy everything, including both RDS clusters and their VPCs.
Actual Behavior
Destroy works on the primary cluster but fails on the secondary cluster
Steps to Reproduce
1. `terraform apply`
2. `terraform destroy`
References
See #6672 for a related issue about using Terraform to manage cross-region Aurora replica clusters.