Feature Request: Support $HOME/.kube/config.d/* #569
Comments
Can't you do the same by having multiple config files in arbitrary locations and listing them in the KUBECONFIG environment variable?
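For context, kubectl already merges multiple files when they are listed in the KUBECONFIG environment variable (colon-separated on Linux/macOS). A minimal sketch, with hypothetical file names:

```shell
# KUBECONFIG accepts a colon-separated list of kubeconfig files,
# which kubectl merges at load time. Paths here are illustrative.
KUBECONFIG="$HOME/.kube/config:$HOME/.kube/cluster-a.yaml:$HOME/.kube/cluster-b.yaml"
export KUBECONFIG
# kubectl config get-contexts   # would list contexts from all three files
```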
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I'd be keen on this too. If there are others who would like it, then I'd be happy to put some time in for a PR.
@weibeld Sure, but that wouldn't be the standard way and it will be hard to convince tool vendors to follow through; it is also not exactly a simple or easy approach.
Sure, a […]. What do you mean by "not the standard way"?
I think […]. However, if […]. What I mean by "standard way" is: asking a tool maintainer to support kubectl context is much more convincing than asking them to support some ad-hoc workflow. Hope that makes it clear :)
Is kubectl-switch a specific kubectl plugin or command? In general, I wouldn't edit a kubeconfig file by hand, but use the kubectl config commands.
By kubectl-switch, I meant […]
@weibeld I would tend to automate using […]. I have a few use cases for this, one is something like: https://github.com/dwmkerr/terraform-aws-openshift where I'd like to run […]
@dwmkerr It could indeed be more user-friendly to replace the […]. In the end, both approaches allow doing the same thing, but the directory approach frees you from having to deal with environment variables (which can be accidentally set and unset easily). Currently, in your case, you would need to have your […]
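Until something like config.d is supported natively, the directory behaviour can be approximated today by globbing files into KUBECONFIG. This is only a sketch, and the ~/.kube/config.d layout is hypothetical:

```shell
# Build a colon-separated KUBECONFIG from a (hypothetical) drop-in
# directory plus the default config file.
KUBECONFIG="$HOME/.kube/config"
for f in "$HOME"/.kube/config.d/*.yaml; do
  # Guard against the glob matching nothing.
  [ -e "$f" ] && KUBECONFIG="$KUBECONFIG:$f"
done
export KUBECONFIG
```

The downside is exactly the one discussed above: every shell and every tool has to agree to construct this variable the same way.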
Cool, I've got time this week so will aim to do it then 😄 |
FYI I've created the design document and raised the feature request in the appropriate location (I think!) and mentioned this issue there, so I believe this issue can be closed now as it is linked from the new feature request.
I closed this issue under the impression that you had opened a pull request; now, after looking into this again, I am not sure that is the case. Is this issue not the right way to go about discussing whether something is up for consideration or not? I don't understand why an issue related to […]
Hi @omeid, to be honest I found the guide for contributing quite confusing. I was following the instructions here: https://github.com/kubernetes/community/blob/master/sig-cli/CONTRIBUTING.md I followed the instructions, which involved creating the design doc, opening an issue on staging, etc. It's quite a complex process so far, and I'm not sure the issues are even being seen...
@dwmkerr if you're not getting responses from sig-cli, they list ways you can escalate for attention: https://github.com/kubernetes/community/blob/master/sig-cli/CONTRIBUTING.md#general-escalation-instructions |
Hi @cblecker, thanks for the tips! Just pinged on the group; I've tried Slack with no joy, but let's see what the group says. Really appreciate the help!
There's a lot of history here: […] And it would probably need to be respected by API client libraries, not just kubectl.
/sig cli |
/sig api-machinery |
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Still want this feature.
/lifecycle frozen |
/priority backlog |
Is this answer sufficient: https://stackoverflow.com/questions/46184125/how-to-merge-kubectl-config-file-with-kube-config/46184649#46184649 ? |
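The linked answer boils down to merging via KUBECONFIG plus `kubectl config view --flatten`, which inlines certificate data so the result is a single portable file. A sketch with hypothetical paths:

```shell
# Merge two kubeconfigs into one self-contained file.
# --flatten inlines credentials/certificates into the output.
export KUBECONFIG="$HOME/.kube/config:/tmp/extra-cluster.yaml"
# kubectl config view --flatten > /tmp/merged \
#   && mv /tmp/merged "$HOME/.kube/config"
```

This works, but it is a one-time merge: later changes to the source files are not picked up, which is part of the motivation for a config.d.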
@pwittrock the problem with actively mutating a single file is that we have to do the locking to avoid race conditions. If we could support a config.d, and then a separate file for just the current-context, most cluster management tools would probably manage their own kubeconfig files without fear of racing with other tools. See also: kubernetes/kubernetes#92513. EDIT: racing on current-context is much less problematic than racing on the credentials etc. and possibly wiping out another cluster.
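The race described above is the classic shared-file write problem; client-go itself guards kubeconfig writes with a lock file for the same reason. As a sketch (using util-linux flock and an illustrative path, not anything kubectl-specific):

```shell
# Serialize concurrent writers of a shared file with an advisory lock.
# Without a lock, two tools mutating the same kubeconfig can read,
# modify, and write back interleaved copies, clobbering each other.
demo=/tmp/kubeconfig-demo
flock "$demo.lock" sh -c "echo 'write from tool A' >> $demo"
flock "$demo.lock" sh -c "echo 'write from tool B' >> $demo"
```

With one file per tool under a config.d, no cross-tool locking would be needed at all.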
What if we were to add a […]?

The challenges with supporting […]. A possible solution to this would be to enable the feature with a flag […]. Another solution could be to have the […].

EDIT: I see the issue about […].

EDIT 2: The support issue also impacts using different versions of kubectl. I know gcloud provides a dispatcher that matches the version of kubectl to a cluster. There would likely be issues if it read the […].

WDYT?
If we kept current-context specifically in the kubeconfig file specified by the existing rules, then those tools would at worst be pointed to a nonexistent context and do nothing (since the context name would refer to one stored in some file they don't yet read)? Most tools get the (possibly broken) locking by importing client-go. A glob in the KUBECONFIG env would be no less breaking than supporting a config.d; tools would still need to be updated to read it for the problem scenario you outlined.
For plugins, kubectl could pass a fully resolved KUBECONFIG list value / env based on the files discovered when calling them. Ditto for the dispatcher. |
This might be easier on VC |
Forgot to post back here: We spoke over VC and hashed it out, I intend to have a short doc ready to discuss at the next SIG CLI meeting. |
@BenTheElder Can you link me to the CLI meeting please? |
One challenge with multiple configs is that the config file currently stores two types of information: definitions (clusters, users, contexts) and state (such as current-context and preferences).
If there are multiple configs, how will kubectl interpret the […]? For example, consider a config file that looks like this: […]
To me, it seems like […]. Anyway, this is something I've thought about in the past and thought it made sense to mention it here. I think kubectl will need to handle how […]
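To make the definitions-vs-state concern concrete, a minimal kubeconfig mixing both might look like the following (all names hypothetical); if several files under a config.d each carried their own current-context, kubectl would have to decide which one wins:

```shell
# Write an illustrative kubeconfig: clusters/users/contexts are
# definitions, while current-context is mutable per-user state.
cat > /tmp/example-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: cluster-a
  cluster:
    server: https://cluster-a.example.com
contexts:
- name: ctx-a
  context:
    cluster: cluster-a
    user: user-a
users:
- name: user-a
  user: {}
current-context: ctx-a
EOF
```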
Sorry this fell by the wayside, I have not gotten to any real development lately... swamped in reviews etc. 😞 @brianpursley this was part of our thinking as well... only the existing location should have current-context / preferences, potentially with some validation warning / erroring if present in config.d 👍
@BenTheElder sorry I should have read back before I commented. It looks like the current context issue has already been brought up. This issue was mentioned today in the sig cli meeting and I wanted to add this concern, but it sounds like you guys already covered it. 👍 |
Also, ideally the other, non-state config files should be restricted to one cluster per file to avoid any confusion and conflict in the future.
I use this script to load all kube config 🤣 […]
Any news from the various SIG meetings to share? This would be a killer improvement I've been hoping to see since 2019. |
We have discussed this in a few different forms. This comment sums up the issue with a change like this.
|
Greetings!
kubectl already allows defining multiple clusters and users in $HOME/.kube/config, however editing this file by hand or even by tools is a bit cumbersome. If kubectl supported loading multiple config files from $HOME/.kube/config.d/, it would make dealing with different cluster configurations much easier. For example:
kubespray already generates a config file, but it is still not easy to use it if you already have a config file set up (Create admin credential kubeconfig kubernetes-sigs/kubespray#1647).
aws eks cli already mutates the config file, but dealing with multiple clusters or updating this information requires way more mental overhead than it should require.
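If config.d existed, tools like the eks CLI could each own a file instead of mutating the shared config. A sketch of that hypothetical workflow; the update-kubeconfig flags below are real AWS CLI options, but the config.d destination is the proposed behaviour, not something kubectl reads today:

```shell
# Hypothetical workflow under the proposed config.d scheme: one file
# per cluster, written by the tool that provisions it.
mkdir -p "$HOME/.kube/config.d"
# aws eks update-kubeconfig --name my-cluster \
#   --kubeconfig "$HOME/.kube/config.d/my-cluster.yaml"
```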
I would like to hear some feedback on this and whether a pull request would be considered.
Many Thanks!