runOnControlPlane/runOnMaster: true makes the controller unschedulable #861
Comments
I see the control plane nodes have the labels, but these don't match, given the way Kubernetes matches labels to selectors. We probably want a simple selector of type "Exists", without specifying a value:

```yaml
template:
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
                - key: node-role.kubernetes.io/master
                  operator: Exists
```

References:
There are other k8s clusters where the master node has a different label. If you want to make this CSI driver controller run on master nodes matching a specific label, you could define that yourself.
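For illustration only, a node affinity that matches a specific label value could look roughly like the sketch below; the label key and value used here (kubernetes.io/role: master) are placeholders, not taken from this thread.

```yaml
# Hypothetical sketch: schedule the controller only on nodes whose
# label carries a specific value (placeholder key/value shown here).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/role   # placeholder label key
              operator: In
              values:
                - master                # placeholder label value
```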
Wouldn't we simply remove these options? Instead, we can add some examples to the docs or as comments in values.yaml on how to use affinity.
Either that, or we should mention in the docs that it may not work for some implementations.
I have fixed this issue by using:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
```
@andyzhangx Thank you! Note that exactly the same problem exists with runOnMaster. So it is probably a good idea to add the same fix for that one.
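A sketch of what the analogous runOnMaster fix might look like, assuming the same Exists-based affinity is applied to the legacy master label; this is illustrative, not the actual change.

```yaml
# Sketch only: the same Exists-style affinity, applied to the legacy
# node-role.kubernetes.io/master label for runOnMaster.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
```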
I think it would be useful to mention in the docs, and as a comment in values.yaml, that runOnControlPlane=true only has an effect if the user doesn't have an affinity block in their values.yaml.
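For illustration, a user-supplied affinity block in values.yaml might look like the sketch below; the controller.affinity field name is an assumption about the chart's values layout, not something confirmed in this thread.

```yaml
# Hypothetical values.yaml snippet: if the chart honours a user-supplied
# affinity block (assumed here to live under controller.affinity), that
# block would be used and runOnControlPlane would have no effect.
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
```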
@andyzhangx Super, thanks!
What happened:
Setting controller.runOnControlPlane: true makes the controller unschedulable.
The same applies to controller.runOnMaster.
This is the same problem as kubernetes-csi/csi-driver-nfs#787.
What you expected to happen:
The controller to get scheduled.
How to reproduce it:
Environment:
- Kubernetes version (use `kubectl version`): v1.30.6+k3s1
- Kernel (e.g. `uname -a`): 6.11.5-2-default