k9s config.yaml does not persist config for multiple k8s clusters #1359
@gazpwc Thank you for reporting this! The current design sanitizes the context configurations to ensure unused contexts are wiped. Thus, when you specify a different kubeconfig, the sanitization process is triggered and any invalid contexts are cleaned up. So this is by design. If you wish to preserve the k9s cluster state, you should be able to leverage the KUBECONFIG env var to specify the various kube configs you need to connect to, and your k9s context configs would be preserved. Does this make sense?
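The KUBECONFIG approach described above can be sketched as follows. Colon-separated KUBECONFIG paths are standard kubeconfig-merging behavior; the file paths and context name below are hypothetical placeholders:

```shell
# Point KUBECONFIG at several kubeconfig files at once (colon-separated),
# so all contexts stay visible to k9s and its sanitization pass keeps
# every one of them. The paths here are hypothetical examples.
export KUBECONFIG="$HOME/.kube/configA:$HOME/.kube/configB"
echo "$KUBECONFIG"

# With all contexts visible, launching k9s against one context should no
# longer wipe the state stored for the others:
#   k9s --context contextA
```

The trade-off, as noted below, is that this merges cluster access into one view, which some users deliberately avoid.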
Hi @derailed, thanks for getting back to me. This might work, but for my way of working it's quite inconvenient to modify the KUBECONFIG env var each time I switch to a different cluster, since I need to do that a couple of times per day or even per hour. That's why I opened the first issue #1247 based on my findings with kubie (https://github.com/sbstp/kubie). But if the cleanup process is implemented intentionally, is it possible to make it optional? I could perfectly live with some invalid contexts in my k9s config file. For me, that would be by far the smallest inconvenience compared to the others. 😃
I came here to second this - I have many KUBECONFIGs for lots of various environments, and I hate with a passion losing my favorite namespaces, nodeShell settings, etc. on every switch (happens multiple times a day). Edit to add: as @derailed mentioned, I already use a method to dynamically pass the single KUBECONFIG I need to my current shell, and no more than one at a time. I purposely want to keep those separated in order not to mix cluster access, even accidentally.
I'd like to second this feature as well. This would make working with different clusters and many namespaces even more convenient. So maybe it would be sufficient if there were an option to specify the config path.
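One way the "separate config path" idea could work is a per-cluster config directory. Recent k9s releases honor a K9S_CONFIG_DIR environment variable; treating that as an assumption to verify against your installed version, a hypothetical sketch:

```shell
# Hypothetical sketch: give each cluster its own k9s config directory, so
# sanitization of one cluster's config.yaml never touches another's.
# K9S_CONFIG_DIR support is an assumption here; verify it against your
# k9s release. The directory layout below is illustrative.
export K9S_CONFIG_DIR="$HOME/.config/k9s-clusters/clusterA"
mkdir -p "$K9S_CONFIG_DIR"

# Then launch k9s as usual:
#   k9s --kubeconfig ~/.kube/configA --context contextA
```

Switching clusters would then just mean exporting a different K9S_CONFIG_DIR before launching.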
Hi @derailed, since it seems that I am not alone with that wish, do you see any chance of making that sanitization process optional in one of the future releases? That would hugely enhance my user experience.
@derailed any updates on this issue? I understand that this is "expected behavior" here, but a lot of people use tools that switch kubeconfigs many times a day, and losing per-cluster settings on every switch is painful. I would also be happy to look into submitting a PR for this if you don't mind pointing me in the right direction for where this behavior lives in the codebase. I just noticed a long period of radio silence on this issue, so I was hoping to re-engage it.
* 'master' of github.com:derailed/k9s: (130 commits)
  - added flux suspended resources retrieval plugin (derailed#1584)
  - Provide white blur so images work in dark modes (derailed#1597)
  - Add context to get-all (derailed#1701)
  - fix brew command in the readme (derailed#2012)
  - Add support for using custom kubeconfig with log_full plugin (derailed#2014)
  - feat: allow for multiple plugin files in $XDG_DATA_DIRS/k9s/plugins (derailed#2029)
  - Clean up issues introduced by derailed#2125 (derailed#2289)
  - Pod view resembles more the output of kubectl get pods -o wide (derailed#2125)
  - Update README.md with snap install (derailed#2262)
  - Add snapcraft config (derailed#2123)
  - storageclasses view keeps the same output as kubectl get sc (derailed#2132)
  - Fix merge issues with PR derailed#2168 (derailed#2288)
  - Add colour config for container picker (derailed#2140)
  - Add env var to disable node pod counts (derailed#2168)
  - Use current k9s NS if new context has no default NS (derailed#2197)
  - Bump actions/setup-go from 4.0.1 to 4.1.0 (derailed#2200)
  - fix: trigger a single log refresh after changing 'since' (derailed#2202)
  - Add crossplane plugin (derailed#2204)
  - fix(derailed#1359): add option to keep missing clusters in config (derailed#2213)
  - K9s release v0.28.2
  - ...
I also encountered a similar issue with the latest version of k9s, when skins were moved into the per-cluster config :-) Now my settings get constantly overwritten - #2336. This parameter helped me -
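The fix landed via the commit titled "fix(derailed#1359): add option to keep missing clusters in config (derailed#2213)". Going only by that title, the toggle in config.yaml might look roughly like this; the exact key name and location are assumptions, so check the docs for your installed release:

```yaml
# Hypothetical sketch only: the key name and nesting are inferred from the
# PR title, not verified against a specific k9s release.
k9s:
  keepMissingClusters: true
```

With such a flag enabled, the sanitization pass would leave cluster entries in place even when their contexts are absent from the currently loaded kubeconfig.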
definite +1 here as well. another option would be to support specifying the k9s config itself on the cli. i don't believe that is an option at this time. |
Thank you all for piping in! |
…railed#2213) Co-authored-by: Clément Loiselet <clement.loiselet@cbp-group.com>
Describe the bug
The k9s config.yaml contains a clusters: section which can store different configurations for individual clusters (at least to my understanding). Unfortunately, this config gets overwritten whenever k9s is used to connect to another cluster; it only ever contains the settings for the currentCluster. I already described the behaviour in #1247 when using kubie, but it is also reproducible with native k9s.
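For reference, the per-cluster state in question looks roughly like this in config.yaml; the field names below are an illustrative assumption based on older k9s layouts, not taken verbatim from any specific release:

```yaml
# Illustrative sketch of per-cluster state in k9s config.yaml (field names
# are assumptions; the exact schema varies across k9s versions).
k9s:
  currentCluster: clusterA
  clusters:
    clusterA:
      namespace:
        active: default
      view:
        active: pods
```

The bug is that after connecting to clusterB, only a clusterB entry remains; the clusterA block above is wiped.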
To Reproduce
Steps to reproduce the behavior:
1. k9s --kubeconfig ~/.kube/configA --context contextA
2. k9s --kubeconfig ~/.kube/configB --context contextB
3. Inspect config.yaml: only the settings for contextB remain.
Expected behavior
I would expect k9s to be able to store configurations (like last used namespace, view, ...) for multiple clusters and to persist them in config.yaml even after switching from one cluster to another.
Versions (please complete the following information):