[EKS] [request]: Auto-apply node role labels securely #1381
Comments
@mikestef9 This proposal applies especially to nodes that are managed outside EKS, sometimes called unmanaged nodes.
By means of a visual example, here's the node-level view of a simple EKS cluster with 3 distinct node groups (db, logs, app), as seen in Lens. Having the ability to set role labels to some meaningful value at the time the instance is provisioned would dramatically increase the usability of these tools; everything helps to reduce the time-to-resolution.
@adeatpolte here is my workaround; I don't think it's possible to set these in EKS in any other way.
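Roughly, it amounts to labelling the node from outside the Kubelet once it has joined; a minimal sketch (the node name and role value are placeholders) looks like:

```bash
# Placeholder node name and role; run with credentials that are allowed to edit nodes.
kubectl label node ip-10-0-1-23.eu-west-1.compute.internal \
  node-role.kubernetes.io/worker=

# The role then shows up in the ROLES column of:
kubectl get nodes
```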
I ran into this exact issue and ended up not using node roles with my EKS managed node groups. I didn't understand why setting these role labels would break things.
Just in case anyone's planning to try this, |
The biggest problem is that these unofficial labels are now kind of a convention for multiple tools. I understand there are security concerns around allowing the Kubelet to set those labels, and the upstream community has decided to park it. For those willing to understand more about it, I'd recommend kubernetes/kubernetes#75457.
Underscores in the key for a label are unusual. If you're making a tool that interacts with labels, you can warn your users if they try to use a node role label that doesn't look right. |
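For example, a tool could run a quick sanity check on user-supplied label keys; the helper name and heuristics below are only an illustration:

```bash
# Illustrative check: warn when a label key is probably meant to be a node
# role label but doesn't look like one.
warn_on_suspicious_role_label() {
  local key="$1"
  if [[ "$key" == node-role.kubernetes.io/* && "$key" == *_* ]]; then
    echo "warning: '$key' contains underscores, which is unusual in a node role label" >&2
  elif [[ "$key" == *node?role* && "$key" != node-role.kubernetes.io/* ]]; then
    echo "warning: '$key' looks like a node role label but lacks the node-role.kubernetes.io/ prefix" >&2
  fi
}

warn_on_suspicious_role_label "node_role.kubernetes.io/worker"
```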
Also, for clarity:
@sftim I agree 100% with you, and my point was to share a perspective on why so many people ask for this.
Idea
Since Kubernetes 1.16, the Kubelet is no longer allowed to set node role labels itself, for security reasons. However, with the Cluster Autoscaler or with a maximum EC2 instance lifetime, it is not feasible to label every node by hand. There is an ongoing discussion about how to auto-label nodes based on tags, but I don't think this is the right approach, because it would re-introduce the very security issue this change was made to address.
To apply the node role label both automatically and securely, the node must provide evidence of its identity – its IAM identity – to a cluster component, which would then perform the labelling. So here are a few ideas for how this could be built:
Create a `DaemonSet` that retrieves the node's IAM role, and a custom resource to specify which node labels should be applied to that IAM role. Based on that information, the operator can perform the automatic labelling (a rough sketch is included below).

Or you build that directly into the EKS control plane with a new IAM action like `eks:LabelNode`, which would be performed when the Kubelet starts up. As there is already an authentication mechanism in place when the node joins the cluster, it could be integrated there, too. Instead of creating a custom resource, the `aws-auth` config map could be extended as well.

Looking forward to typing `kubectl get nodes` and quickly identifying my node roles again :-)

❗ Update: This feature request applies especially to nodes that are managed outside EKS, sometimes called unmanaged nodes.
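To make the first idea more concrete, here is a minimal sketch (not a definitive implementation) of the labelling logic such a cluster component could run. It assumes the component has IAM permission for `ec2:DescribeInstances`, RBAC permission to patch nodes, and AWS_REGION set in its environment; the instance profile names and role labels in the mapping are made up for illustration:

```bash
#!/usr/bin/env bash
# Sketch: apply node role labels based on each node's EC2 instance profile.
# Intended to run in-cluster (e.g. from an operator or CronJob), not on the Kubelet.
set -euo pipefail

# Illustrative mapping: IAM instance profile name -> node role label to apply.
declare -A ROLE_LABELS=(
  [eks-db-node-profile]="node-role.kubernetes.io/db="
  [eks-logs-node-profile]="node-role.kubernetes.io/logs="
  [eks-app-node-profile]="node-role.kubernetes.io/app="
)

for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  # spec.providerID looks like aws:///eu-west-1a/i-0123456789abcdef0
  provider_id=$(kubectl get node "$node" -o jsonpath='{.spec.providerID}')
  instance_id=${provider_id##*/}

  # Ask EC2 which instance profile (and therefore which IAM role) the node runs under.
  profile_arn=$(aws ec2 describe-instances \
    --instance-ids "$instance_id" \
    --query 'Reservations[0].Instances[0].IamInstanceProfile.Arn' \
    --output text)
  profile_name=${profile_arn##*/}

  label=${ROLE_LABELS[$profile_name]:-}
  if [[ -n "$label" ]]; then
    kubectl label node "$node" "$label" --overwrite
  fi
done
```

The `eks:LabelNode` variant would effectively move this mapping and the label call into the control plane, so no in-cluster workload would need `ec2:DescribeInstances` or node patch permissions.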