
KEP-2804: Consolidate workload controllers status - introduce Operational condition #3524

Open
wants to merge 1 commit into master
Conversation

atiratree
Member

  • One-line PR description:
    Introduce a new Operational condition for workloads that should solve the pitfalls of the Available condition.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 19, 2022
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Sep 19, 2022
@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 19, 2022
@atiratree atiratree changed the title KEP-2804: Consolidate workload controllers status - rework Available condition KEP-2804: Consolidate workload controllers status - introduce Operational condition Sep 19, 2022
@atiratree atiratree force-pushed the consolidate-status branch 2 times, most recently from 777c4ed to 665aaf2 Compare September 19, 2022 15:56
Individual workload controllers mark a ReplicaSet or StatefulSet as `available` when the number of available replicas reaches the number of replicas.
- This could be a bit confusing for ReplicaSets, since a Deployment could become available sooner than its ReplicaSet due to `maxUnavailable`.
- Available replicas is an alpha feature guarded by the StatefulSetMinReadySeconds gate in StatefulSets, but the value defaults to ReadyReplicas when the feature gate is disabled, so using it shouldn't be an issue.
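
To make the difference concrete, here is a small illustrative sketch (not the controller source; the helper names are hypothetical) of the stricter ReplicaSet/StatefulSet rule quoted above versus the existing Deployment rule that tolerates `maxUnavailable`:

```go
// Illustrative sketch only - these helpers are hypothetical, not the
// controller implementation.
package main

import "fmt"

// replicaSetAvailable mirrors the quoted rule for ReplicaSets/StatefulSets:
// available once availableReplicas reaches spec.replicas.
func replicaSetAvailable(specReplicas, availableReplicas int32) bool {
	return availableReplicas >= specReplicas
}

// deploymentAvailable mirrors the existing Deployment rule, which tolerates
// maxUnavailable - hence a Deployment can become available before its
// ReplicaSet does.
func deploymentAvailable(specReplicas, maxUnavailable, availableReplicas int32) bool {
	return availableReplicas >= specReplicas-maxUnavailable
}

func main() {
	// 3 of 4 replicas available with maxUnavailable=1: the Deployment is
	// available, while the ReplicaSet under the stricter rule is not.
	fmt.Println(deploymentAvailable(4, 1, 3)) // true
	fmt.Println(replicaSetAvailable(4, 3))    // false
}
```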
#### Operational
Member

I think this shouldn't be added by Kubernetes.

From https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties

Objects may report multiple conditions, and new types of conditions may be added in the future or by 3rd party controllers.

It doesn't sound like this condition would be useful for a wide range of users. In particular, it's very hard to interpret for Jobs, as opposed to just looking at .status.active. But OTOH users are free to set it up based on their own policies.

Member Author

@atiratree atiratree Sep 19, 2022

I think this shouldn't be added by Kubernetes.

Objects may report multiple conditions, and new types of conditions may be added in the future or by 3rd party controllers.

I am not sure if this is a sufficient reason. Can you please be more specific about which area this would be problematic in? I have looked into cloud-native projects and have not found any such use case in the context of this condition.

It doesn't sound like this condition would be useful for a wide range of users. In particular, it's very hard to interpret for Jobs, as opposed to just looking at .status.active. But OTOH users are free to set it up based on their own policies.

I am sorry this was poorly formulated in the beginning (it is hopefully better now). As described in the proposal, the Operational condition will not be added to Jobs or CronJobs: https://github.com/atiratree/kube-enhancements/blob/consolidate-status/keps/sig-apps/2804-consolidate-workload-controllers-status/README.md#proposed-conditions I think it makes sense for the workloads where it is mentioned. Do you have any remarks about the workloads where it is proposed?

As we discussed before, we are not targeting Jobs and CronJobs at this time; we will revisit our evaluation of what to do in that space later.

Member

My PoV is more general than just Jobs:

If Operational has a fixed policy (for example, all pods are available), it's unlikely to be useful to all users. I also think that having some spec fields to control when the Operational condition is added might be overkill. I guess the question is what use case this condition is enabling.

What is wrong with just exposing the number of active workloads and letting users interpret that as they wish? Or letting them write their own controllers to append a condition that is meaningful to them?

Member Author

@atiratree atiratree Sep 22, 2022

My PoV is more general than just Jobs:

If Operational has a fixed policy (for example, all pods are available), it's unlikely to be useful to all users. I also think that having some spec fields to control when the Operational condition is added might be overkill.

I agree. As I suggested in the KEP, we could first introduce the condition and, once there is additional interest, consider adding customization via a spec field.

We can be more thorough with the feedback and wait for a longer time before graduating this, to be sure we are committed to the right thing.

I guess the question is what use case is this condition enabling.

This is mainly for a better user experience, but yes, operators for example can do without this feature and use the status fields directly.

What is wrong with just exposing the number of active workloads and letting users interpret that as they wish? Or letting them write their own controllers to append a condition that is meaningful to them?

Cons of that approach (see also the sketch below):

  • inconsistency: workloads expose this via different fields (e.g. DaemonSet vs StatefulSet)
  • not everybody knows the details of each status, and this condition would give a common user a single interface
  • it is not really feasible for each user to implement their own controllers for this when you consider maintenance, RBAC permissions, and collisions with other controllers. I think it makes sense for this to be implemented by the platform so everybody can benefit from this common functionality
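
As a rough illustration of the inconsistency in the first bullet, the sketch below (using the `k8s.io/api/apps/v1` types; the helper functions themselves are hypothetical) answers the same "are enough pods available?" question from differently named status fields on DaemonSet and StatefulSet:

```go
// Illustrative only: the same question is answered from differently named
// status fields today, which is the inconsistency this bullet refers to.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// For a DaemonSet the relevant pair is desiredNumberScheduled / numberAvailable.
func daemonSetHealthy(ds *appsv1.DaemonSet) bool {
	return ds.Status.NumberAvailable >= ds.Status.DesiredNumberScheduled
}

// For a StatefulSet it is spec.replicas / availableReplicas instead.
func statefulSetHealthy(sts *appsv1.StatefulSet) bool {
	want := int32(1)
	if sts.Spec.Replicas != nil {
		want = *sts.Spec.Replicas
	}
	return sts.Status.AvailableReplicas >= want
}

func main() {
	ds := &appsv1.DaemonSet{}
	ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable = 3, 3

	replicas := int32(5)
	sts := &appsv1.StatefulSet{Spec: appsv1.StatefulSetSpec{Replicas: &replicas}}
	sts.Status.AvailableReplicas = 5

	fmt.Println(daemonSetHealthy(ds), statefulSetHealthy(sts)) // true true
}
```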

Contributor

Any controller that exposes an update mechanism that defines “availability” as a constraint (minReady, maxUnavailable) by definition is maintaining an Available condition. Since all three key workload controllers with update strategies expose those fields deliberately, I am not sure why we are stating “Available” is not consistent if added to those workload types. They were designed to be consistent. I’ll reread the proposed changes, but I’m a bit confused and hope I’m not missing something fundamental here.

Contributor

@smarterclayton smarterclayton Oct 20, 2022

The workloads user cannot specify a different value after the rollout to tune it specifically for their application.

That is a not unreasonable requirement we could add to all workload controllers by introducing a field that modifies that... if that's really our problem. Controllers know the difference between rolling out a config change vs steady state.

Contributor

https://github.com/kubernetes/kubernetes/blob/fce22c807f2851a26811332df9952a1826e6dcc1/pkg/controller/deployment/sync.go#L514

Has a fencepost bug.

	if availableReplicas >= *(deployment.Spec.Replicas)-deploymentutil.MaxUnavailable(*deployment) {

should be

        if availableReplicas > 0 && ...

You can't be available with zero pods. What user behavior would be dependent on this?
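
A minimal sketch of the fencepost in question (simplified, not the actual controller code; the function names are hypothetical), showing how the current `>=` rule reports a scaled-to-zero Deployment as available while the suggested `> 0` guard would not:

```go
// Simplified reproduction of the check being discussed, not the real
// deployment controller code.
package main

import "fmt"

func availableCurrent(availableReplicas, specReplicas, maxUnavailable int32) bool {
	return availableReplicas >= specReplicas-maxUnavailable
}

func availableSuggested(availableReplicas, specReplicas, maxUnavailable int32) bool {
	return availableReplicas > 0 && availableReplicas >= specReplicas-maxUnavailable
}

func main() {
	// Scaled to zero replicas: 0 >= 0-0 holds today, so the Deployment is
	// reported available with zero pods; the extra guard prevents that.
	fmt.Println(availableCurrent(0, 0, 0))   // true
	fmt.Println(availableSuggested(0, 0, 0)) // false
}
```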

Contributor

Opened kubernetes/kubernetes#113228 - if there is a backwards compatibility issue we can discuss it there.

Contributor

Ok, going through this I see the issue: we are consistent across both; we just defined it in a way that makes sense for machines (apply the rule consistently) but not necessarily for humans (having no replicas is not "available" at a deployment level). I agree with the assessment that we cannot change the behavior of the condition because of that consistency.

Given that, we SHOULD still implement Available consistently across all controllers. Being inconsistent between types is just as bad as being inconsistent within a given type (if available was reported differently during update and steady state) - we prioritize consistency of the system over net new features.

For that reason I am against introducing a net new condition without first (or at the same time) making Available consistent across the standard types which have rolling upgrade and minimum availability defined. Will make that comment in the other places.

Contributor

Of course, as soon as I type this I realize that Recreate deployments do go unavailable during update, which doesn't make sense. The presence of the Recreate strategy could be interpreted as identical to replicas=1, rolling, maxUnavailable=25% (default) - the minimum availability of a Recreate deployment is 0, not 100%. Will need to ponder that tomorrow.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 19, 2022
This enhancement extends the scope of those and other conditions to other controllers (DaemonSet, Job, ReplicaSet & ReplicationController, StatefulSet).
This enhancement extends the scope of those and other conditions to other controllers (DaemonSet, Job, ReplicaSet & ReplicationController, StatefulSet, Deployment) where applicable.

These conditions can be used by high-level controllers, tools, and end users in an effort to hide specifics of each workload
Member

We have different workload APIs because they behave very differently, so I don't think we can necessarily generalize conditions.

In particular, I think we should give up on generalizing "Available" or "Operational", as you are suggesting.

Member Author

Some of them behave differently, but we are suggesting this only for the workloads which have common characteristics.

  • Deployment, StatefulSet, DaemonSet: all of them have a rollout dynamic in common and would benefit from a Progressing condition to know the state of a rollout
  • ReplicaSet, Deployment, StatefulSet, DaemonSet: the responsibility of these controllers is to heal the pods and make sure the user application has enough pods to be healthy; I think Operational fits this use case

| Deployment | - | - | X | X | A | X | - | - | - |
| StatefulSet | - | - | A | - | A | - | - | - | - |
| DaemonSet | - | - | A | - | A | - | - | - | - |
| Job | A | A | - | - | - | - | X | X | X |
Member

Why are we introducing conditions for Job that no other workload has? It seems to go against the motivation of the KEP.

Member Author

@atiratree atiratree Sep 22, 2022

I am sorry, but the batch part needs an overhaul. Nevertheless, the mission of this KEP is to examine all existing conditions and try to identify new beneficial ones, but only where applicable, so we may end up with only a single condition per workload.

An advantage of this bird's-eye view of all conditions is that we do not introduce any overlap or inconsistency.


The DaemonSet controller marks a DaemonSet as `available` when `numberUnavailable` (i.e. `desiredNumberScheduled - numberAvailable`) is zero.
The DaemonSet controller marks a DaemonSet as `Operational` when `numberAvailable >= desiredNumberScheduled - threshold`.
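
As an illustration of the two rules quoted above (hypothetical helpers; `threshold` is the tolerated number of unavailable daemon pods assumed by the proposed Operational condition):

```go
// Hypothetical sketch contrasting the strict "available" rule with the
// proposed tolerant "Operational" rule for DaemonSets.
package main

import "fmt"

func daemonSetAvailable(desiredNumberScheduled, numberAvailable int32) bool {
	return desiredNumberScheduled-numberAvailable == 0
}

func daemonSetOperational(desiredNumberScheduled, numberAvailable, threshold int32) bool {
	return numberAvailable >= desiredNumberScheduled-threshold
}

func main() {
	// 9 of 10 daemon pods available, with a threshold of 1:
	fmt.Println(daemonSetAvailable(10, 9))      // false - strict rule
	fmt.Println(daemonSetOperational(10, 9, 1)) // true - tolerates the shortfall
}
```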

#### Batch Workloads Conditions: Waiting & Running
Member

Unfortunately, I was not involved in the previous version of the KEP, but I'm not finding much value in these conditions.

Did you consider just adding one condition named "Ready" with a value equal to .status.ready>0?

Member Author

I think available replicas is a bit more useful, since users can utilize spec.minReadySeconds, which adds more tuning for when an application is considered healthy. They can also choose to set it to 0 to consider only readiness.

I am not trying to tie in the batch workloads as well, since their use case is quite a bit different. Do you have some special use case in mind with regard to the Ready condition?

Contributor

For service workloads, we want the highest-level condition possible (one that captures the most actual end-user impact) that allows improvements to observed-outcome tuning (new fields) to show up in its value. Ready would probably be too low-level because it would not capture minReadySeconds. I think that to fully decide what we are adding, we have to be very precise about the use cases the conditions solve and the end-user automation problems the conditions enable.

| StatefulSet | - | - | A | - | A | - | - | - | - |
| DaemonSet | - | - | A | - | A | - | - | - | - |
| Job | A | A | - | - | - | - | X | X | X |
| CronJob | - | - | - | - | - | - | - | - | - |
Member

Suspended might be useful in CronJob

Member Author

As I mentioned, I am trying to leave the batch workloads out of this KEP update. But thanks for pointing this out; I will take it into consideration when we focus on batch. Nevertheless, feel free to open an update to the batch part of this KEP if you want to give it priority sooner.

- analyze Available condition and problems with introducing this to
  other workloads
- instead introduce a new Operational condition for workloads that
  should solve the pitfalls of the Available condition
The semantic meaning of the `Available` condition is that the deployment is healthy and operating at sufficient capacity (number of replicas).

Introducing this condition as-is to other workloads is not straightforward:
- ReplicaSet & ReplicationController: To introduce this in ReplicaSet we would have to mark `status=True` in the Available condition when all the replicas are available, since there is no rollout mechanic in ReplicaSets.
Contributor

The Available condition only has meaning on types that define availability; ReplicaSet and RC do not define availability, therefore they do not need to be consistent for either Progressing or Available. They should be consistent on ReplicaFailure.

@@ -113,9 +122,14 @@ Workload controllers should have above conditions (when applicable) to reflect t
As an end-user of Kubernetes, I'd like all my workload controllers to have consistent statuses.
Contributor

This is not really a story per se - I'd remove it. I'd start with a story along the lines of:

As a user deploying workloads to Kubernetes, I would like to know how to correctly wait (via observation, CLI, or client code) for a newly created or recently updated deployment, statefulset, or daemonset to be rolled out and at the correct version.

(which will require a combo of conditions, describing observed generation / generation, and nuances of each controller)

I would also change story 2 to be:

As a developer building workload orchestration (operators, gitops, higher level workload types) above Kubernetes, I would like to know generically when an update to a service workload is fully rolled out and safe to declare as successful, and know what guidance to give workload creators around common failures

(that implies the Available condition or a replacement as above, as well as explaining Progressing and the progress deadlines)

For 3 and 4 I think of those as variants of the first two stories, but the goal isn't "unified statuses", the goal is "each class of workload controller has a reasonably consistent process". Many of the questions asked here should be presented in the KEP as questions an end user might have:

  • What happens if I try to wait for Available on a paused controller?
  • Should pausing cause progressing to be false?
  • What intermittent errors should be surfaced in the conditions and how do we avoid clients having to add more and more tests for intermittent errors (answer: by having the condition abstract them)?

We should also formally describe the class of workloads in the KEP (as definitions) - service and batch are two starts, but we need to do a quick pass to determine what other workload classes might have need of this (by reviewing examples from the extension ecosystem).
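
A minimal sketch of what the first story could look like in client code, assuming today's `k8s.io/api/apps/v1` types (the `rolledOut` helper and its exact policy are hypothetical, not what the KEP prescribes): combine `observedGeneration` with the Available and Progressing conditions to decide whether a rollout is complete. In practice `kubectl rollout status` derives this largely from replica counts and observedGeneration rather than from conditions alone, which is part of why the stories above call for a clearer contract.

```go
// Sketch only: one possible client-side interpretation of "fully rolled out",
// not the behaviour the KEP defines.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

func conditionTrue(d *appsv1.Deployment, t appsv1.DeploymentConditionType) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == t {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// rolledOut is true once the controller has observed the latest spec and
// reports the workload both Available and Progressing=True (a finished
// rollout keeps Progressing=True with reason NewReplicaSetAvailable;
// checking the reason is omitted here for brevity).
func rolledOut(d *appsv1.Deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false // controller has not seen the update yet
	}
	return conditionTrue(d, appsv1.DeploymentAvailable) &&
		conditionTrue(d, appsv1.DeploymentProgressing)
}

func main() {
	d := &appsv1.Deployment{}
	d.Generation = 2
	d.Status.ObservedGeneration = 2
	d.Status.Conditions = []appsv1.DeploymentCondition{
		{Type: appsv1.DeploymentAvailable, Status: corev1.ConditionTrue},
		{Type: appsv1.DeploymentProgressing, Status: corev1.ConditionTrue},
	}
	fmt.Println(rolledOut(d)) // true
}
```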

Contributor

I want to review the non-goals once the previous comment is addressed, but to propose a new condition we have to set overall objectives. The first and foremost is that we understand the full process (how a user waits for a desired outcome, and how we define that outcome). Once we have that, we then need to assess how to get there, and the KEP needs to reason from current state and then discuss the choices. If Available is broken, then we have to figure out how it is currently useful to that flow (and what users may be expecting it to work and not getting the outcome they want), and capture that utility in the new outcome.

The first two non-goals I think are wrong - backward API compatibility is assumed, so we don't need to state it (or, if we do, it needs to be very explicitly described in the alternatives section why we chose not to break it).

Member Author

thanks for the review, will try to address it next week

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 25, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@atiratree
Member Author

still relevant: will revisit after kubernetes/kubernetes#113228 and kubernetes/website#39892 are decided upon
/reopen

@k8s-ci-robot k8s-ci-robot reopened this Apr 3, 2023
@k8s-ci-robot
Contributor

@atiratree: Reopened this PR.

In response to this:

still relevant: will revisit after kubernetes/kubernetes#113228 and kubernetes/website#39892 are decided upon
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: atiratree
Once this PR has been reviewed and has the lgtm label, please assign janetkuo for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@atiratree
Member Author

#3524 (comment) still applies
/remove-lifecycle rotten
/reopen

@k8s-ci-robot k8s-ci-robot reopened this May 15, 2023
@k8s-ci-robot
Contributor

@atiratree: Reopened this PR.

In response to this:

#3524 (comment) still applies
/remove-lifecycle rotten
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 15, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 18, 2024
@atiratree
Member Author

/remove-lifecycle rotten
There is still interest in solving this for the workloads: kubernetes/kubernetes#79606

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2024
@tenzen-y
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 10, 2024
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/apps Categorizes an issue or PR as relevant to SIG Apps. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
Status: Needs Triage
6 participants