Behavior change in daemonset pod eviction #5240
Any thoughts on this?
We faced the same situation in GKE (1.24.6-gke.1500) with the managed node-local DNS service. Our services have a slight delay while stopping, but node-local DNS usually stops faster than some workloads, which leads to failed requests.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: v1.21.3
What k8s version are you using (kubectl version)?:
(kubectl version output not included)
What environment is this in?:
AWS EKS
What did you expect to happen?:
Daemonset pods, especially those marked system-node-critical, to either not be evicted by default or to be evicted only after all other pods are evicted.
What happened instead?:
Daemonset pods were evicted before other non-critical, non-daemonset pods.
How to reproduce it (as minimally and precisely as possible):
Trigger a scale-down of a node running daemonset pods, without the cluster-autoscaler.kubernetes.io/enable-ds-eviction annotation and without --daemonset-eviction-for-occupied-nodes=false.
Anything else we need to know?:
I understand that cluster-autoscaler.kubernetes.io/enable-ds-eviction and --daemonset-eviction-for-occupied-nodes were added to control daemonset pod eviction, but the default behavior on occupied nodes changed from "do not evict daemonset pods" to "evict daemonset pods unless the annotation is present". This behavior change can lead to regressions and unexpected breakage.
If the default behavior cannot be changed back, then daemonset pods, or more specifically system-node-critical pods, should not be evicted until all other pods are evicted. See issue #4337 for a similar discussion.