Node Labelling from AWS Tags #110
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Just want to add that this would be a huge help in the AWS-CNI bootstrapping process when you use custom networking (ENIConfig CRDs, which are AZ dependent) to label the nodes to use the proper corresponding ENIConfig.
/assign
I'm tentatively starting to look into this, but I'd appreciate some input before I dive deeper.
The 'role' label risk applies to all arbitrary labels, because any label might be the label used by the cluster admins to put sensitive workloads on tightly-controlled nodes. That's the rationale for removing this feature from kubelet in the first place, except for a small allow-list of labels kubelet may still self-apply.
So the equivalent security posture would be that all EC2 tag-sourced labels go under a dedicated, clearly-identified prefix. Then for @chiefy's use-case, the ENIConfig label would simply live under that prefix.
FWIW kOps supports node labeling from tags using the tag prefix that cluster-autoscaler recognizes (`k8s.io/cluster-autoscaler/node-template/label/`).
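For concreteness, here is a minimal Go sketch of what honouring that tag prefix could look like; `labelsFromTags` is a hypothetical helper, the example tag values are made up, and it assumes the tags have already been fetched into a plain map:

```go
package main

import (
	"fmt"
	"strings"
)

// nodeTemplateLabelPrefix is the ASG tag prefix that Cluster Autoscaler's
// auto-discovery uses for node labels.
const nodeTemplateLabelPrefix = "k8s.io/cluster-autoscaler/node-template/label/"

// labelsFromTags converts tags carrying the node-template prefix into
// label key/value pairs and ignores everything else.
func labelsFromTags(tags map[string]string) map[string]string {
	labels := map[string]string{}
	for k, v := range tags {
		if strings.HasPrefix(k, nodeTemplateLabelPrefix) {
			labels[strings.TrimPrefix(k, nodeTemplateLabelPrefix)] = v
		}
	}
	return labels
}

func main() {
	tags := map[string]string{
		"k8s.io/cluster-autoscaler/node-template/label/workload-class": "batch",
		"Name": "my-nodegroup",
	}
	fmt.Println(labelsFromTags(tags)) // map[workload-class:batch]
}
```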
Appreciate the input. That makes sense to me. As an additional check, how do you feel about hiding this behind a flag? It'd be false by default, and the cluster admin would have to explicitly allow the tag labeling, forcing them to acknowledge this behavior.
I think it would make sense to:
Supporting Cluster Autoscaler's Auto-Discovery tags is particularly nice, because right now I have to duplicate those values in eksctl, once as tags and again as labels/taints for the Nodegroup. eksctl-io/eksctl#1481 (comment) is a request to have eksctl auto-copy nodegroup labels and taints to the ASG tags, but it never actually advanced, probably because you can solve it manually in the config.

All that said, I just realised that CA's tags are on the ASG and are not required to propagate to the EC2 Instance for CA to work correctly. So to implement the node-template handling, this controller would need to find the instance's ASG and read the tags from there, and I'm not actually sure if the AWS API lets you get an ASG from an EC2 Instance, so this might be painful to implement.

In Managed Nodegroups, the strongly-related aws/containers-roadmap#724 is moving to teach Cluster Autoscaler to pull the list of labels and taints from the Managed Nodegroup API, because there is a limit of 50 tags on the ASG, although that's not a new problem for non-managed Nodegroups in Cluster Autoscaler.
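On the ASG-lookup question above: as far as I know, aws-sdk-go's `DescribeAutoScalingInstances` call does map an instance ID to its ASG, so a rough sketch of the lookup might look like the following. The helper name `asgTagsForInstance` and the instance ID are hypothetical, and IAM permissions and retry/error handling are glossed over:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

// asgTagsForInstance finds the ASG an instance belongs to and returns that
// ASG's tags as a map.
func asgTagsForInstance(svc *autoscaling.AutoScaling, instanceID string) (map[string]string, error) {
	// Map the instance ID to its AutoScalingGroup name.
	inst, err := svc.DescribeAutoScalingInstances(&autoscaling.DescribeAutoScalingInstancesInput{
		InstanceIds: []*string{aws.String(instanceID)},
	})
	if err != nil || len(inst.AutoScalingInstances) == 0 {
		return nil, fmt.Errorf("no ASG found for %s: %v", instanceID, err)
	}
	asgName := inst.AutoScalingInstances[0].AutoScalingGroupName

	// Fetch the ASG itself, which carries the tags.
	groups, err := svc.DescribeAutoScalingGroups(&autoscaling.DescribeAutoScalingGroupsInput{
		AutoScalingGroupNames: []*string{asgName},
	})
	if err != nil || len(groups.AutoScalingGroups) == 0 {
		return nil, fmt.Errorf("describe ASG %s: %v", aws.StringValue(asgName), err)
	}

	tags := map[string]string{}
	for _, t := range groups.AutoScalingGroups[0].Tags {
		tags[aws.StringValue(t.Key)] = aws.StringValue(t.Value)
	}
	return tags, nil
}

func main() {
	sess := session.Must(session.NewSession())
	tags, err := asgTagsForInstance(autoscaling.New(sess), "i-0123456789abcdef0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tags)
}
```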
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Following up on my notes above, the third dot-point ("Optionally handle cluster-autoscaler's node-template tag namespace") wouldn't be necessary for Managed Nodegroups: per aws/containers-roadmap#724 (comment), Managed Nodegroups will not be using AWS tags to feed labels/taints to Cluster Autoscaler. So perhaps this would be better to default to "off" after all, to avoid surprises, particularly surprise interactions with Managed Nodegroups. Then you have the choice between duplicating the tags into labels/taints locally, e.g. as I do now with eksctl, or turning on the automatic application of that tag namespace by cloud-provider-aws.
/remove-lifecycle stale
Since this came back on my radar again, I thought I'd mention that in my suggestion earlier, the fourth dot-point ("Support (but not enable by default) other administrator-provided tag or tag-prefixes to honour") has a high risk of introducing a security vulnerability, and it probably needs to provide, for example, an option to block using AWS Tags to mark a node as a master.

On EKS, I imagine it doesn't make sense to mark EC2 nodes as 'master', as EKS provides that. I haven't used non-EKS setups on EC2, so I'm not sure if it would make sense to allow this tagging there, or if it could just be default-blocked in all cases.
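One possible shape for that safeguard, sketched under the assumption that tag-sourced labels pass through a filter before being applied; the deny-list contents and function name here are illustrative, not a recommendation:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical deny-list: label keys that tag-sourced labelling should never
// be allowed to set, regardless of whether the feature flag is on.
var blockedExact = map[string]bool{
	"kubernetes.io/role": true,
}
var blockedPrefixes = []string{"node-role.kubernetes.io/"}

// tagLabelAllowed reports whether a tag-derived label key may be applied.
func tagLabelAllowed(key string) bool {
	if blockedExact[key] {
		return false
	}
	for _, p := range blockedPrefixes {
		if strings.HasPrefix(key, p) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(tagLabelAllowed("node-role.kubernetes.io/master")) // false
	fmt.Println(tagLabelAllowed("workload-class"))                 // true
}
```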
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity an issue is marked stale, after a further 30d it is marked rotten, and after another 30d of inactivity it is closed.
You can mark this issue as fresh with /remove-lifecycle stale, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity an issue is marked stale, after a further 30d it is marked rotten, and after another 30d of inactivity it is closed.
You can mark this issue as fresh with /remove-lifecycle rotten, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity an issue is marked stale, after a further 30d it is marked rotten, and after another 30d of inactivity it is closed.
You can reopen this issue with /reopen, or mark it as fresh with /remove-lifecycle rotten.
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I just saw this one. It's an interesting topic.
What would you like to be added:
I'd like the AWS Cloud Provider to be able to apply labels from Instance tags (probably inherited from ASG or Launch Template) to the Node at creation time.
Why is this needed:
Since k8s 1.16, kubelet is not allowed to set node labels (with specific exceptions) in the *.kubernetes.io namespace. Specific instances of this are the role labels, `node-role.kubernetes.io/X=` and `kubernetes.io/role=X`.

Tools such as eksctl currently rely on the EC2 user_data to pass labels for kubelet to set as self-labels, so that the user-specified labels are available on nodes started from an AutoScalingGroup, which otherwise have no direct interaction with eksctl. Currently, eksctl is forced to reject labels that kubelet cannot self-apply.
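For readers unfamiliar with the restriction, the following is a rough, self-contained illustration of the rule kubelet self-labels are checked against. It is a simplification of the NodeRestriction admission logic, not a copy of it, and the allow-list shown is abbreviated:

```go
package main

import (
	"fmt"
	"strings"
)

// Kubelets may self-set labels outside the kubernetes.io / k8s.io
// namespaces, plus a small allowed set inside them (abbreviated here).
var allowedKubeletPrefixes = []string{"kubelet.kubernetes.io/", "node.kubernetes.io/"}
var allowedKubeletLabels = map[string]bool{
	"kubernetes.io/hostname":           true,
	"kubernetes.io/arch":               true,
	"kubernetes.io/os":                 true,
	"node.kubernetes.io/instance-type": true,
	"topology.kubernetes.io/region":    true,
	"topology.kubernetes.io/zone":      true,
}

// kubeletMaySelfLabel reports whether kubelet is allowed to self-apply a
// label with the given key under this simplified rule.
func kubeletMaySelfLabel(key string) bool {
	ns := ""
	if i := strings.Index(key, "/"); i >= 0 {
		ns = key[:i]
	}
	restricted := ns == "kubernetes.io" || strings.HasSuffix(ns, ".kubernetes.io") ||
		ns == "k8s.io" || strings.HasSuffix(ns, ".k8s.io")
	if !restricted {
		return true
	}
	if allowedKubeletLabels[key] {
		return true
	}
	for _, p := range allowedKubeletPrefixes {
		if strings.HasPrefix(key, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(kubeletMaySelfLabel("node-role.kubernetes.io/worker")) // false
	fmt.Println(kubeletMaySelfLabel("my-company.com/team"))            // true
}
```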
Per my comment on the eksctl issue, setting kubelet-disallowed labels on nodes from eksctl should not be blocked by this change, as this is a "user" action, not a "kubelet" action.
It seems to me that the AWS Cloud Provider (really the Node controller) is the right place to apply some other source of information, i.e. AWS Tags as used by, e.g., Cluster Autoscaler, in order to add labels to incoming Nodes.
That said, poking at CloudNodeController doesn't show an obvious place where arbitrary labels can be applied to an incoming Node, so this might not be as simple as I'd hoped, or even the right place to do this.
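Independent of where such a hook would live in CloudNodeController, the mechanism itself would presumably be an ordinary controller-side patch. A minimal client-go sketch, with a hypothetical node name and label, and no claim about the right integration point:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// labelNode applies tag-derived labels to an existing Node object via a
// strategic-merge patch, leaving all other labels untouched.
func labelNode(ctx context.Context, cs kubernetes.Interface, nodeName string, labels map[string]string) error {
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"labels": labels},
	})
	if err != nil {
		return err
	}
	_, err = cs.CoreV1().Nodes().Patch(ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := labelNode(context.TODO(), cs, "ip-10-0-0-1.ec2.internal",
		map[string]string{"example.com/workload-class": "batch"}); err != nil {
		log.Fatal(err)
	}
}
```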
Per aws/containers-roadmap#733, EKS's Managed Node Groups feature is also blocked from applying kubelet-forbidden labels to its nodes. Separately, metal3.io have encountered the same issue, and are contemplating support in their CAPM3 controller to manage this, which seems the equivalent of the AWS Cloud Provider in a metal3 Cluster API rollout.
Separately, this would open a discussion about whether node labels should be updated if the instance's tags change while the node is alive.
/kind feature