docs: add consolidation policies rfc #1433
Conversation
Signed-off-by: Cameron McAvoy <cmcavoy@indeed.com>
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.
Sorry to have let this go stale. /reopen
@jmdeal: Reopened this PR.
Karpenter provides three forms of consolidation: single-node, multi-node, and emptiness consolidation. Single-node consolidation replaces an expensive node with a cheaper node that satisfies the workloads' requirements. Multi-node consolidation replaces multiple nodes with a single, larger node that can satisfy the workloads' requirements. Emptiness consolidation removes nodes with no workloads that need relocation, to reduce costs.
Customers want control over the types of consolidation that can occur within a nodepool. Some nodepools may only be used for job-type workloads (crons, jobs, Spark, etc.), where single-node consolidation is dangerous and wastes execution when it disrupts pods. Other nodepools may be used mostly by long-running daemons, where daemonset costs are more significant and multi-node consolidation to binpack into larger nodes would be preferred.
Ideally, multi-node and single-node consolidation would just be collapsed into the same type of consolidation, so users don't need to think about the fact that there are two concepts for consolidation.
Karpenter's consolidation considers the price of instances when deciding to consolidate, but does not consider the cost of disruption. This can harm cluster stability if the price improvement is small or the disrupted workloads are very costly to restore to their previous running state (e.g., a long-running job that must restart from scratch).
One suggestion I'd make is that we estimate the cost of disruption, and there are some (albeit lesser used) knobs that a user can use to tweak this in their favor.
### Option 1: Add price thresholds, merge consolidation controllers

The motivation for the additional controls is the high cost of disrupting certain workloads for small or negative cost savings. So instead of offering controls, the proposal is to add price-improvement thresholds for consolidation to avoid this scenario.
The proposal is to find a price-improvement threshold that accurately represents the "cost" of disrupting the workloads. This threshold could be an arbitrary fixed value (e.g., 20% price improvement) or heuristically computed by Karpenter based on the cluster shape (e.g., nodepool TTLs might be a heuristic input).
Out of curiosity, what do you think a reasonable default could be for this?
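As an illustrative sketch only, a fixed threshold might surface on the `NodePool` as below. The field name `priceImprovementThreshold` is hypothetical and not part of any existing Karpenter API; it stands in for whatever shape this option would actually take.

```yaml
# Hypothetical sketch of Option 1 — the threshold field does not exist today.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: batch-workloads
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    # Only consolidate when the replacement is at least 20% cheaper,
    # so small savings never justify disrupting costly workloads.
    priceImprovementThreshold: "20%"
```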
The separation of single-node and multi-node consolidation is arbitrary. Single-node consolidation finds the "least costly" eligible node to disrupt and uses that as a solution, even if it is a poor one. However, the least costly node to disrupt is not necessarily the node whose removal saves the most, yet single-node consolidation stops as soon as it has any solution.
> but the single-node consolidation stops after it has any solution.

This is the main difference from multi-node today. Multi-node doesn't (and can't) go through all the potential options that single-node can, so single-node finds edge cases of cost savings that multi-node doesn't.
In contrast, multi-node consolidation spends as much time as possible searching for a solution. So while price thresholds could be implemented in each consolidation controller, it is simpler to consider merging their responsibilities into one shared controller based on multi-node consolidation. This combined consolidation controller could search as long as possible for the most effective consolidation outcome, whether emptiness, multi-node, or single-node replacement.
Multi-node has a timeout of 1 minute: https://github.com/kubernetes-sigs/karpenter/blob/main/pkg/controllers/disruption/multinodeconsolidation.go#L35, single node has 3 minutes.
Suggested change (dropping the first sentence):

> So while price thresholds could be implemented into each consolidation controller, it is simpler to consider merging their responsibilities into one shared controller based on multi-node consolidation. This combined consolidation controller could search as long as possible for the most effective consolidation outcome, from emptiness, multi-node or single-node replacements.
> This combined consolidation controller could search as long as possible for the most effective consolidation outcome

We've seen this break down at larger clusters with complex scheduling constraints, for instance preferred anti-affinity, where Karpenter has to try to schedule pods multiple times: once with the constraints treated as hard constraints, and once as soft constraints. We may be able to achieve a performance benefit here through hardening our story on preferences, though.
* 👎 High amount of engineering effort to implement.
### Option 2: Refactor and Expand Consolidation Policies

Another approach would be to avoid significant changes to consolidation and instead expose controls to allow enabling or disabling the various consolidation controllers. Karpenter already presents binary consolidation options: the `NodePool` resource has a disruption field that exposes the `consolidationPolicy` field, an enum with two supported values: `WhenEmpty` and `WhenUnderutilized`.
Slightly updated since — suggested change:

> Another approach would be to not make significant changes to consolidation, but expose controls to allow enabling or disabling various consolidation controllers. Karpenter already presents the binary consolidation options. The `NodePool` resource has a disruption field that exposes the `consolidationPolicy` field. This field is an enum with two supported values: `WhenEmpty` and `WhenEmptyOrUnderutilized`.
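For reference, the existing control surface looks roughly like the sketch below, built only from the fields named above; the exact enum values depend on the API version in use (the review notes `WhenUnderutilized` was later renamed `WhenEmptyOrUnderutilized`).

```yaml
# Sketch of the current NodePool disruption controls referenced in this RFC.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # Today's enum per this RFC's text: WhenEmpty | WhenUnderutilized
    consolidationPolicy: WhenUnderutilized
```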
And change the semantics of the existing consolidation policy enum value:

* `WhenUnderutilized`
Changing the semantics of the existing consolidation policy to resolve to something different is a breaking change, and could definitely surprise people who upgrade without changing their values to the right one. If we're going this route, I'd prefer we keep the existing enum value meaning the same thing, and add another one to portray the mode you're proposing.
The semantics of each enum value match its name, making it straightforward to explain to customers or users. `WhenUnderutilizedOrCheaper` becomes the new default and behaves as `WhenUnderutilized` did previously. `WhenUnderutilized` changes to allow only emptiness or multi-node consolidation for a nodepool. `WhenCheaper` allows only emptiness or single-node consolidation for a nodepool.
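A nodepool opting into the proposed semantics might look like the sketch below. These enum values are the ones proposed in this RFC, not an existing API.

```yaml
# Proposed (not yet existing) consolidationPolicy values from this RFC:
#   WhenUnderutilizedOrCheaper - all consolidation forms (new default)
#   WhenUnderutilized          - emptiness + multi-node only
#   WhenCheaper                - emptiness + single-node only
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: job-pool
spec:
  disruption:
    # Jobs are costly to disrupt mid-run, so disallow single-node
    # replacement; still binpack via multi-node and reclaim empty nodes.
    consolidationPolicy: WhenUnderutilized
```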
#### Pros/Cons
Another con: more effort for users to understand the three different modes.
* 👍 Fine-grained controls for the various consolidation controllers.
* 👎 Complex configuration for the `NodePool` CRD.
* 👎 Complex implementation for the various consolidation controllers.
* 👎 Limits Karpenter's ability to change or remove existing consolidation controllers, as config will be tightly coupled.
pretty large downside for me
* 👍 Simple implementation for the various consolidation controllers.
* 👎 If Karpenter adds consolidation modes with new purposes, more `consolidationPolicy` enum values will be needed to make them toggleable.
### Option 3: Add Consolidation Configuration to NodePools |
Not sure if we can do this in a backwards-compatible way that wouldn't require a v2 of the API.
Previously on 9/26 we had discussed splitting the problem statement into a separate PR so we could gather consensus and confirm we agree on the problem space. I apologize for taking so long, but I finally opened #1746.
Fixes #N/A
Description
RFC for #1429 and #1430
Let me know what topics or areas of interest need to be included or weren't fully covered. First time writing one of these.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.