Feature: Add ability to customize node daemonset nodeselector #647
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @pliu!
Hi @pliu. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Pull Request Test Coverage Report for Build 1385
💛 - Coveralls
Thank you! Will take a look later.
/ok-to-test
Can you bump the chart version to 0.7.1?
@ayberk done :)
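For reference, the bump itself is a one-line change in the chart's Chart.yaml. The fragment below is a hedged sketch: the surrounding fields are illustrative, and the previous version (presumably 0.7.0) is inferred from the request above.

```yaml
# charts/aws-ebs-csi-driver/Chart.yaml (illustrative fragment)
apiVersion: v1
name: aws-ebs-csi-driver
version: 0.7.1  # bumped for this PR, presumably from 0.7.0
```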
/lgtm
Thank you!
Note to self: we should ideally deploy only to AWS nodes by default...
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: ayberk, pliu
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
I imagine that would be a bit difficult, since you'd need a Kubernetes-level label (or equivalent) that specifies which nodes are backed by AWS hosts. Currently we're using a custom label for this, but that label would need to be standardized if we wanted to make this the default behavior. Alternatively, one could run it on every node and have it be a no-op on nodes not backed by AWS hosts (though the argument could be made that that is unacceptable overhead).
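For concreteness, a minimal sketch of that custom-label approach; the label key example.com/aws-host is hypothetical, standing in for whatever label your provisioning tooling applies to AWS-backed nodes (e.g. via kubectl label node <node-name> example.com/aws-host=true):

```yaml
# DaemonSet pod template fragment: schedule only onto nodes carrying
# the (hypothetical) label that marks AWS-backed hosts
spec:
  template:
    spec:
      nodeSelector:
        example.com/aws-host: "true"
```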
Yeah, we need a well-known label, which should be doable, especially with the cloud-provider extraction. I'm not a big fan of the no-op solution because it can be confusing. We have a workaround with this, so it's a nice-to-have for later :)
Oh nice. I'll look into this cloud-provider extraction thing (if you have a link I can start with, that'd be appreciated :)).
We're building a new version of cloud-provider-aws here, although it's in early stages: https://github.com/kubernetes/cloud-provider-aws#readme
Is this a bug fix or a new feature?
Feature
What is this PR about? / Why do we need it?
This allows users to customize the nodeSelector field of the DaemonSet that deploys the ebs-csi-node pods, the same way one can already customize the nodeSelector of the Deployment that deploys the controllers. This is needed for hybrid clusters in which some nodes are backed by AWS hosts and others are not (the ebs-csi-node pods on non-AWS hosts crash-loop continuously).
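As a hedged sketch of the intended usage (the node.nodeSelector values key is assumed here to mirror the existing controller-side nodeSelector; check the chart's values.yaml for the exact name, and substitute your own node label):

```yaml
# values.yaml fragment: restrict the ebs-csi-node DaemonSet to AWS-backed
# nodes via a hypothetical site-specific label
node:
  nodeSelector:
    example.com/aws-host: "true"
```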
What testing is done?
Verified that the chart renders as expected.
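One way to reproduce that check locally, sketched with placeholder paths (the values file would contain a fragment like the one shown in the PR description above):

```sh
# Render the chart offline and confirm the node DaemonSet picks up the
# selector; the chart path and values file name are placeholders
helm template ./charts/aws-ebs-csi-driver -f my-values.yaml | grep -A 2 nodeSelector
```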