delete VPC resources #103
Defaulting to this sounds dangerous when some consumers may be leveraging pre-existing VPCs.
@JordanFaust indeed, full cleanup would only be done in cases where the VPC was created by us. Also, this is really a broad issue at this point; for example, deleting all cluster resources before deleting the cluster itself could be sufficient in many cases.
Maybe prompt the user or have a --all CLI option to delete everything? It's currently a huge PITA deleting clusters, as usually there's something (usually a Load Balancer! sometimes other things...) causing the VPC to not get deleted, resulting in lots of fun with the AWS console.
James, yes indeed, we would want a flag to control this, but I am leaning towards having it enabled by default (if the VPC was one created by us).
This is of interest to me as I'm working with an app that creates k8s LoadBalancer services on demand. So at the moment, it's not possible for me to cleanly delete the clusters I create with eksctl. I think it makes sense for 'delete cluster' to automatically clean up the ELBs that were created within the eksctl-created VPC.
I think at present you should be able to delete the services first, then wait for GC to kick in, but I do think that this is rather awkward and the waiting time may cost you.
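For anyone automating this today, a minimal sketch of that workaround, assuming kubectl and eksctl already point at the cluster (the service name test and the cluster name are illustrative):
$ # find services that own cloud load balancers
$ kubectl get services --all-namespaces | grep LoadBalancer
$ # delete them and give the service controller time to remove the ELBs
$ kubectl delete service test
$ eksctl delete cluster --name=mycluster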
I have been deleting the ELBs. The VPC delete then gets hung up on the related security groups, so it seems to require deleting those manually also. But yes, it is a bit painful, as I'm trying to automate as much as possible.
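For the security group step, a rough sketch with the AWS CLI (the VPC ID is illustrative; this lists only the non-default groups left in the VPC):
$ aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'SecurityGroups[?GroupName!=`default`].GroupId' --output text
$ aws ec2 delete-security-group --group-id sg-0123456789abcdef0
Groups that reference each other may need their rules revoked before the delete succeeds.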
Paul, have you tried deleting the service that owns the ELB and waiting for some time? I am pretty sure GC should kick in, but I just don't know the period it runs at (will need to check).
I have done that. The GC hasn't kicked in as far as I have seen. At least it didn't do so in a short enough time frame.
General info about handling stack deletion failures: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails. See also #523 (comment).
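To see which resources are actually blocking a stack delete, something like this works (a sketch; eksctl cluster stacks usually follow the eksctl-<cluster>-cluster naming pattern):
$ aws cloudformation describe-stack-events --stack-name eksctl-mycluster-cluster \
    --query 'StackEvents[?ResourceStatus==`DELETE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
    --output table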
Any progress on this? Specifically on the ELB part?
We have laid some groundwork for this; there is now logic that deletes stale ENIs, but more work is needed in this area before we can extend the functionality.
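For context, the stale ENIs in question are the ones left unattached in the cluster VPC, which can be listed with something like this (a sketch; the VPC ID is illustrative):
$ aws ec2 describe-network-interfaces \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 Name=status,Values=available \
    --query 'NetworkInterfaces[].NetworkInterfaceId' --output text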
Hi @errordeveloper - is there any update? 10x
Hey @errordeveloper, any news on this issue?
I did a simple test, and it looks like ELBs certainly get deleted right away when a service is downgraded to type: ClusterIP (which is not always trivial; for example, I had to also clear nodePort):
$ kubectl describe service test
...
Events:
  Type    Reason                Age  From                Message
  ----    ------                ---- ----                -------
  Normal  EnsuringLoadBalancer  10m  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   10m  service-controller  Ensured load balancer
  Normal  Type                  25s  service-controller  LoadBalancer -> ClusterIP
  Normal  DeletingLoadBalancer  25s  service-controller  Deleting load balancer
  Normal  DeletedLoadBalancer   14s  service-controller  Deleted load balancer
$ aws elb describe-load-balancers --region=us-west-2 --load-balancer-names=a06ca65f69f2c11e9abda02adb46d809
An error occurred (LoadBalancerNotFound) when calling the DescribeLoadBalancers operation: There is no ACTIVE Load Balancer named 'a06ca65f69f2c11e9abda02adb46d809'
Deleting the service also appears to delete the ELB right away.
I do recall this didn't work as simply before; perhaps it's something that got fixed recently. I used 1.13 for my test cluster; this issue was opened when 1.10 was the only version EKS shipped, and @paulbsch's comments probably relate to 1.10 as well. Perhaps we should try with 1.10, 1.11, and 1.12, and see where it was solved. In any case, we should attempt deleting services before deleting clusters. Even if users didn't delete their workloads and services before deleting a cluster, we should provide them with a clean deletion path.
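For the record, a hedged sketch of that downgrade, assuming a single port on 80 (a strategic merge patch; setting nodePort to null is what clears it):
$ kubectl patch service test \
    -p '{"spec":{"type":"ClusterIP","ports":[{"port":80,"nodePort":null}]}}'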
TL;DR: the code tells me the controller should always start the deletion of an ELB mapped to a service within 30 seconds of the deletion of that service (without needing to downgrade or modify the service).
After reading the release-1.13 code of k8s.io/kubernetes/pkg/controller/service/service_controller.go, the controller uses a 30-second period in its informer. So the controller should notice a service deletion in at most 30 seconds.
Then, upon detecting a deletion, the controller calls `EnsureLoadBalancerDeleted`, which (if EKS runs the upstream code at k8s.io/pkg/cloudprovider/providers/aws/aws_loadbalancer.go) should start the deletion right away.
BTW, reading `EnsureLoadBalancerDeleted` has taught me that it's complicated enough that we should not try to replicate it ourselves.
So, if empirical evidence confirms what I have read, I still think that deleting the service and waiting for the mapped ELBs to disappear is our best option.
PS: sorry for the lack of links, I am airborne and my flight's internet connection doesn't like GitHub.
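A minimal sketch of that delete-and-wait approach, reusing the ELB name from the earlier test (both names are illustrative):
$ kubectl delete service test
$ until aws elb describe-load-balancers --region=us-west-2 \
      --load-balancer-names=a06ca65f69f2c11e9abda02adb46d809 2>&1 \
      | grep -q LoadBalancerNotFound; do sleep 10; done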
I have created a cluster with a new VPC (following https://eksctl.io/usage/creating-and-managing-clusters/). So far everything gets deleted, including the VPC it created (even though it doesn't reflect right away); is that normal behavior? I'm just hoping that when I apply eksctl to my production environment with my existing VPC, it won't wipe out my existing VPC. Is there any place where I can look up best practices for integrating eksctl into an existing production environment step by step?
@rusyasoft It only deletes VPCs that it created.
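For reference, a hedged sketch of pinning a cluster to a pre-existing VPC by passing its subnets explicitly (subnet IDs are illustrative; check eksctl create cluster --help for the exact flags in your version):
$ eksctl create cluster --name=prod \
    --vpc-private-subnets=subnet-0aaa1111,subnet-0bbb2222 \
    --vpc-public-subnets=subnet-0ccc3333,subnet-0ddd4444
With subnets supplied this way, eksctl does not create a VPC of its own, so there is nothing for delete cluster to remove.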
@michaelbeaumont Maybe that should be added to the eksctl documentation?
Currently, deleting the VPC stack will fail when there are resources such as ELBs. We should be able to delete these; the question is whether we should do it by default when we own the VPC or not.