Deleting a cluster leaves leftover LB #487
Comments
I think this is already covered by #103, which we certainly should prioritize soon.
It seems like it is. Tying deletion of the ELB (and maybe the security groups) to deletion of the whole VPC only seems to delay implementing these features, though. Issue #103 being open since July last year points in that direction too. What's your take on implementing these first, and then looking into the thornier issue afterwards?
Sorry, it's not so clear what you are suggesting exactly; it seems like there's a typo and I cannot guess what you meant in this case. :)
Let me rephrase :) Issue #103 mentions deleting the whole VPC stack. This means deleting Security Groups, Load Balancers, and then the VPC itself, including all routing rules etc. This is something we have to be careful about, since there might be other stuff in that VPC that users don't want deleted, so there are various options under discussion for dealing with that. But it seems to me that deleting the eksctl-specific SGs and LBs is neither controversial nor difficult or dangerous (see the sketch below). So my suggestion is to implement that first and then let the discussion around the rest of the VPC go on for as long as necessary.
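For concreteness, here is a minimal shell sketch of how those Kubernetes-owned resources could be located by tag before deleting them. The cluster name is hypothetical, and the `kubernetes.io/cluster/<name>=owned` convention is an assumption based on how the Kubernetes AWS cloud provider tags the security groups and classic ELBs it creates; this is not eksctl's actual implementation.

```sh
# Sketch only: find resources Kubernetes created for a cluster, by tag.
CLUSTER=my-cluster   # hypothetical cluster name

# Security groups support server-side tag filters:
aws ec2 describe-security-groups \
  --filters "Name=tag:kubernetes.io/cluster/${CLUSTER},Values=owned" \
  --query 'SecurityGroups[].GroupId' --output text

# Classic ELBs cannot be filtered by tag directly, so list them
# and check each one's tags:
for lb in $(aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text); do
  aws elb describe-tags --load-balancer-names "$lb" \
    --query "TagDescriptions[?Tags[?Key=='kubernetes.io/cluster/${CLUSTER}']].LoadBalancerName" \
    --output text
done
```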
Yes, but it assumes that the "VPC stack" was created by us, which used to be a separate CloudFormation stack but later became part of the "cluster stack".
Yes, that's understood, but we would be looking to only delete things in a VPC that is fully managed by us, and only the things that look like what Kubernetes may have created, not things that the user may have created. We will need to make it all safe too, e.g. a two-step process with …
Yes, exactly. I don't think anything beyond that was ever suggested, but we do want to look into whether it's just SG+LB, or whether there are other types of resources we may need to take care of (perhaps storage volumes are the only other thing; see the sketch below).
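To illustrate the storage-volume case: dynamically provisioned EBS volumes also carry cluster tags, so a similar tag-based lookup could apply. This is a sketch under the assumption that the in-tree provisioner tags volumes with `kubernetes.io/cluster/<name>=owned`; the cluster name is made up.

```sh
# Sketch: find EBS volumes that a cluster's dynamic provisioner left behind.
CLUSTER=my-cluster   # hypothetical cluster name

aws ec2 describe-volumes \
  --filters "Name=tag:kubernetes.io/cluster/${CLUSTER},Values=owned" \
  --query 'Volumes[].[VolumeId,State]' --output table
```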
Sounds good. My suggestion boils down to "let's do what we already know about first, and then let's look into possible further concerns." Closing this though, as it is a duplicate. Thanks for taking the time.
Thanks, that totally makes sense. Let me know if you are interested in helping with this, or any other issues :)
What happened?
I saw some leftover LBs in a cloud-nuked AWS account after testing eksctl. They were tagged as belonging to the deleted clusters. All EKS clusters had been deleted through eksctl delete [...]
What did you expect to happen?
No resources to be left over that could incur unexpected costs.
How to reproduce it?
Create a cluster in a fresh account, delete it afterwards, and take a look at the EC2 tab (see the sketch below).
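A more specific reproduction, as a hedged sketch: the leftover ELB is typically created by Kubernetes itself when a Service of type LoadBalancer is deployed, which is why deleting the eksctl CloudFormation stacks does not remove it. The cluster and deployment names below are made up.

```sh
# Sketch: reproduce the leftover-ELB behaviour. Names are hypothetical.
eksctl create cluster --name=leak-test

# Deploy something behind a Service of type LoadBalancer; Kubernetes
# asks the AWS cloud provider to create a classic ELB for it.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Delete the cluster without first deleting the Service:
eksctl delete cluster --name=leak-test

# The ELB created for the Service is still there:
aws elb describe-load-balancers \
  --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text
```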