aws_ecs_capacity_provider will not destroy properly when needing replacement. #14393
Comments
Faced the same. Try to add

@mikalai-t did you actually get that working, or was that just a suggestion to try? We're running into this issue, but an ASG can apparently only have one capacity provider so

Did anyone make any progress with this? Facing the same issue.
@andy-codes @apottere maybe this hack can solve some problems:

```hcl
resource "aws_ecs_capacity_provider" "this" {
  # Force a new capacity provider name whenever the ASG name changes
  name = aws_autoscaling_group.this.name

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.this.arn
    managed_termination_protection = "ENABLED" # requires protect_from_scale_in = true on the ASG

    managed_scaling {
      maximum_scaling_step_size = 2
      minimum_scaling_step_size = 1
      status                    = "ENABLED"
      target_capacity           = 100
    }
  }

  lifecycle {
    create_before_destroy = true
  }
}
```
Nope, this does not resolve the issue: if you change something other than the name, e.g.
I don't know whether this works properly for changing the name, though I believe it does; but for everything else, without a name change, I expect to get the very same error as I got for And honestly, I am not sure we can do anything about this unless the AWS API allows a capacity provider to be changed without recreation, since recreation doesn't seem to be necessary.
Any update on this? I'm having the same issue.
We saw this issue and are optimistic that updating our AWS provider to 3.47.0+ will provide a workaround, thanks to this feature update: #16942. In older versions of the provider, changing almost anything forced a new resource. The bug probably still exists if you try to update the name or the ASG ARN, but otherwise you can avoid it.
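For illustration, here is a sketch of the kind of change that should now apply in place after that upgrade. The resource names are hypothetical, and the in-place behavior for `managed_scaling` fields is an assumption based on the linked feature update:

```hcl
resource "aws_ecs_capacity_provider" "this" {
  name = "my-provider" # changing the name still forces replacement

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.this.arn # changing the ARN also forces replacement

    managed_scaling {
      status          = "ENABLED"
      # Assumption: with provider >= 3.47.0, editing target_capacity is an
      # in-place update instead of forcing a new (undeletable) resource.
      target_capacity = 90
    }
  }
}
```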
This issue still exists with aws v4.21.0. An ugly workaround is to re-create your ASGs and CPs on every run:
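A sketch of that recreate-on-change approach, assuming hypothetical resource names (the `random_id` trigger is an illustration, not the commenter's exact code):

```hcl
# Regenerate a suffix whenever the launch template changes, so the capacity
# provider gets a fresh name and is created before the old one is destroyed.
resource "random_id" "cp_suffix" {
  byte_length = 4
  keepers = {
    # assumption: the launch template version is what changes between runs
    lt_version = aws_launch_template.this.latest_version
  }
}

resource "aws_ecs_capacity_provider" "this" {
  name = "cp-${random_id.cp_suffix.hex}"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.this.arn
  }

  lifecycle {
    create_before_destroy = true # bring up the new CP before deleting the old
  }
}
```

Note the caveat raised below: this only helps if nothing (such as an ECS service's capacity provider strategy) still references the old provider by name when it is destroyed.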
We were using an ASG with name_prefix and then used that name for the CP. It works fine until you want to assign the CP to ECS services: destroying a CP means it must be unassigned from every service first.
Using a random string suffix does not work for the terraform ecs module, since the capacity provider's name is a key value. If you try to make the key a dynamic string, you get the following error on apply:
Then it's back to the destroy/timeout death loop...
This issue is still present. I feel like there are too many cases where the AWS provider simply assumes things are going to work on AWS's end when they clearly don't. I'm not going to stray from the capacity provider topic, but this is a rampant issue in the AWS provider.
Community Note
Issue description
When I have deployed an autoscaling group and an ecs capacity provider, any change that requires a replacement of the capacity provider fails and times out. This appears to have been an issue that was resolved with 2.67, per the issue at #11286
When I use this version, however, I still appear to be unable to destroy a capacity provider.
Manually destroying the provider and running an apply again seems to be a decent workaround.
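A sketch of that manual workaround using the AWS CLI; names and the Terraform address are placeholders, and this assumes the capacity provider is no longer referenced by any service or cluster default strategy:

```shell
# Delete the stuck capacity provider out-of-band
# (<name> is a placeholder for the provider's name or ARN)
aws ecs delete-capacity-provider --capacity-provider <name>

# If Terraform still tracks it, drop it from state (address is an example)
terraform state rm aws_ecs_capacity_provider.this

# Re-run the apply to recreate the provider cleanly
terraform apply
```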
Terraform CLI and Terraform AWS Provider Version
Terraform version: 0.12.26
AWS provider version: 2.67
Affected Resource(s)
aws_ecs_capacity_provider
Terraform Configuration Files
Panic Output
Error: error waiting for ECS Capacity Provider (arn:aws:ecs:us-east-1:xxxxxxxxx:capacity-provider/alphaEC2Provider-w1tJAnjjax5r0kr9) to delete: timeout while waiting for state to become 'INACTIVE' (last state: 'ACTIVE', timeout: 20m0s)
Expected Behavior
The capacity provider should be replaced, as the plan output suggests.
Actual Behavior
An apply times out.
Steps to Reproduce
terraform apply
Change an attribute of the capacity provider that forces replacement (e.g. its name)
terraform apply
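For reference, a minimal configuration of the shape that reproduces this; all names here are hypothetical placeholders, not the reporter's omitted configuration files:

```hcl
resource "aws_launch_template" "example" {
  name_prefix   = "cp-repro-"
  image_id      = "ami-12345678" # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "example" {
  name                  = "cp-repro"
  min_size              = 0
  max_size              = 2
  availability_zones    = ["us-east-1a"]
  protect_from_scale_in = true

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }
}

resource "aws_ecs_capacity_provider" "example" {
  # Changing this name (or the ASG ARN) forces replacement, and the
  # delete then times out while the provider stays ACTIVE.
  name = "alphaEC2Provider"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.example.arn
  }
}
```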