Parallel drain in Cluster Autoscaler #5079

Which component are you using?:
cluster-autoscaler

Is your feature request designed to solve a problem? If so, describe the problem this feature should solve.:
Scale down of non-empty nodes takes a very long time in large clusters.

Describe the solution you'd like.:
Parallel drain.

Describe any alternative solutions you've considered.:
None really; the main factor contributing to slow scale down is the time to drain nodes one by one.

Additional context.:
Design was added in #4766

/assign
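For illustration only, here is a minimal Go sketch of why this matters: draining sequentially costs the sum of all per-node drain times, while a bounded worker pool brings the total closer to the slowest single drain. drainNode, the node names, and maxParallel are hypothetical stand-ins, not Cluster Autoscaler code:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// drainNode is a stand-in for the real drain logic (cordon the node,
// evict its pods, and wait for them to terminate).
func drainNode(node string) {
	time.Sleep(time.Duration(500+rand.Intn(500)) * time.Millisecond)
	fmt.Println("drained", node)
}

func main() {
	nodes := []string{"node-1", "node-2", "node-3", "node-4", "node-5"}

	// Draining these one by one takes the sum of the individual drain
	// times. With bounded parallelism, up to maxParallel drains are in
	// flight at once, so total time shrinks roughly by that factor.
	const maxParallel = 2
	sem := make(chan struct{}, maxParallel) // counting semaphore
	var wg sync.WaitGroup
	for _, node := range nodes {
		wg.Add(1)
		sem <- struct{}{} // block while maxParallel drains are in flight
		go func(n string) {
			defer wg.Done()
			defer func() { <-sem }()
			drainNode(n)
		}(node)
	}
	wg.Wait()
}
```

Bounding the parallelism is the interesting design point: each in-flight drain is live disruption to running workloads, so the limit trades scale-down speed against disruption.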
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
There are a few more changes that still need to be added, but hopefully the initial implementation will land before 1.26 is cut.
The new behavior can be enabled with a dedicated flag.
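Assuming the flag ships as a boolean named --parallel-drain (an assumption here; the comment above does not spell the name out, so check the release notes of the version you run), enabling it is a matter of adding it to the autoscaler's container args, e.g.:

```yaml
# Hypothetical Deployment fragment; flag name and availability depend
# on the Cluster Autoscaler release in use.
spec:
  containers:
    - name: cluster-autoscaler
      image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.0
      command:
        - ./cluster-autoscaler
        - --parallel-drain=true
```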
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I need to change the flag to be enabled by default now.