
k9s config.yaml does not persist config for multiple k8s clusters #1359

Closed
gazpwc opened this issue Dec 7, 2021 · 9 comments
Labels
enhancement New feature or request question Further information is requested

Comments

@gazpwc

gazpwc commented Dec 7, 2021




Describe the bug
The k9s config.yaml contains a clusters: section which can store different configurations for individual clusters (at least to my understanding). Unfortunately, this config gets overwritten whenever k9s is used to connect to another cluster; it only ever contains the settings for the currentCluster.

I already described the behaviour in #1247 when using kubie, but it is also reproducible with native k9s.

To Reproduce
Steps to reproduce the behavior:

  1. Start k9s using k9s --kubeconfig ~/.kube/configA --context contextA
  2. Close k9s
  3. Check config.yml (on Mac: /Users/me/Library/Application Support/k9s/config.yml)
  4. Find the section:

k9s:
  clusters:
    clusterA:
      ...

  5. Start k9s using k9s --kubeconfig ~/.kube/configB --context contextB
  6. Close k9s
  7. Check config.yml (on Mac: /Users/me/Library/Application Support/k9s/config.yml)
  8. Find the section:

k9s:
  clusters:
    clusterB:
      ...

  9. Note that the settings for clusterA no longer exist; they have been removed.

Expected behavior
I would expect k9s to store configurations (like last-used namespace, view, ...) for multiple clusters and to persist them in config.yaml even after switching from one cluster to another.

Versions (please complete the following information):

  • OS: macOS 12.0.1
  • K9s: 0.25.8
  • K8s: 1.20.9
@derailed derailed added enhancement New feature or request AsDesigned Works as designed question Further information is requested and removed enhancement New feature or request labels Dec 10, 2021
@derailed
Owner

@gazpwc Thank you for reporting this! The current design sanitizes the context configurations to ensure unused contexts are wiped. Thus, when you specify a different kubeconfig, the sanitization process is triggered and any invalid contexts are cleaned up, so this is by design. If you wish to preserve the k9s cluster state, you should be able to leverage the KUBECONFIG env var to specify the various kube configs you need to connect to, and your k9s context configs would be preserved. Does this make sense?
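A minimal sketch of this approach (file and context names are illustrative): list every kubeconfig on the KUBECONFIG path, which kubectl-compatible tooling treats as a colon-separated merge list, and pick the context at launch:

# Keep all kubeconfigs visible to k9s at once (colon-separated merge list)
export KUBECONFIG="$HOME/.kube/configA:$HOME/.kube/configB"
k9s --context contextA   # later: k9s --context contextB

With both files on the path, neither context ever looks invalid to the sanitization pass, so its k9s settings survive the switch.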

@gazpwc
Author

gazpwc commented Dec 13, 2021

Hi @derailed ,

thanks for getting back to me.

This might work, but for my way of working it's quite inconvenient to modify the KUBECONFIG env var each time I switch to a different cluster, since I need to do that several times per day, or even per hour.

That's why I opened the first issue #1247, based on my findings with kubie (https://github.com/sbstp/kubie).
This tool lets you keep your KUBECONFIGs in different files and easily jump from one cluster to another.

But if the cleanup process is implemented intentionally, would it be possible to make it optional? I could perfectly live with some invalid contexts in my k9s config file. For me, that would be by far the smallest inconvenience compared to the others. 😃

@blacksd

blacksd commented Jan 30, 2022

I came here to second this: I have many KUBECONFIGs for lots of environments, and I hate with a passion losing my favorite namespaces, nodeShell settings, etc. on every switch (which happens multiple times a day).

Edit to add: as @derailed mentioned, I already use a method to dynamically pass the single KUBECONFIG I need to my current shell, and never more than one at a time. I purposely keep them separated so as not to mix up cluster access, even accidentally.

@tobilau

tobilau commented Apr 13, 2022

I'd like to second this feature as well. It would make working with different clusters and many namespaces even more convenient.
Currently I maintain a set of bash aliases for connecting to the different clusters, e.g.

alias clusterA="k9s --kubeconfig ~/.kubeconfig/clusterA"
alias clusterB="k9s --kubeconfig ~/.kubeconfig/clusterB"

So maybe it would be sufficient if there were an option to specify the config path, e.g.

alias clusterA="k9s --kubeconfig ~/.kubeconfig/clusterA --config ~/.config/k9s/clusterA.yaml"

@gazpwc
Author

gazpwc commented Jul 28, 2022

Hi @derailed,

since it seems that I am not alone with this wish, do you see any chance of making the sanitization process optional in one of the future releases? That would hugely improve my user experience.

@parsec

parsec commented Nov 11, 2022

@derailed any updates on this issue? We use kctx at work to manage multiple contexts, and I'd really like to be able to save favorites for each Kubernetes context. But every time I modify the config file and then open k9s, it wipes the file back to defaults.

I understand that this is "expected behavior" here, but a lot of people use k9s like I do with multiple kube contexts, so I believe this is an important consideration.

I would also be happy to look into submitting a PR for this if you don't mind pointing me to where this behavior lives in the codebase. I just noticed a long period of radio silence on this issue, so I was hoping to re-engage it.

derailed pushed a commit that referenced this issue Nov 12, 2023
Co-authored-by: Clément Loiselet <clement.loiselet@cbp-group.com>
rm-hull added a commit to rm-hull/k9s that referenced this issue Nov 12, 2023
* 'master' of github.com:derailed/k9s: (130 commits)
  ...
  fix(derailed#1359): add option to keep missing clusters in config (derailed#2213)
  ...
@evgmoskalenko

evgmoskalenko commented Dec 12, 2023

I also encountered a similar issue with the latest version of k9s, when skins were moved into the per-cluster config :-) Now, after I apply my settings, everything gets constantly overwritten (see #2336).

The KeepMissingClusters: true parameter helped me, although this feels more like a workaround. Previously, working with skins was more convenient.
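For reference, a sketch of where this flag would sit in config.yaml; the key name and camelCase casing are assumed from the convention used elsewhere in the file and should be checked against the release that shipped derailed#2213:

k9s:
  # assumed key name/casing; option introduced by derailed#2213
  keepMissingClusters: true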

@fawaf

fawaf commented Jan 4, 2024

Definite +1 here as well. Another option would be to support specifying the k9s config itself on the CLI; I don't believe that is an option at this time.

@derailed derailed added enhancement New feature or request and removed AsDesigned Works as designed labels Jan 4, 2024
@derailed
Owner

derailed commented Jan 4, 2024

Thank you all for chiming in!
I think as of v0.30.x the original ticket can be closed, as the specific cluster/context configs now live outside the general config and are no longer managed by k9s. Your kubeconfig changes should no longer affect these configurations; it is now the user's responsibility to axe extraneous cluster/context configurations.
As for skins, you can now use a default skin (aka ui.skin=xxx) in the base config for all your clusters, or customize a specific cluster/context via skin=xxx in that context's config.yaml.
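A sketch of the two options, with an illustrative skin name; the file locations are assumed from the v0.30 layout and may vary by OS:

# Base config (e.g. ~/.config/k9s/config.yaml): default skin for all clusters
k9s:
  ui:
    skin: dracula

# Per-context config (e.g. ~/.local/share/k9s/clusters/<cluster>/<context>/config.yaml)
k9s:
  skin: dracula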

@derailed derailed closed this as completed Jan 4, 2024
thejoeejoee pushed a commit to thejoeejoee/k9s that referenced this issue Feb 23, 2024
fix(derailed#1359): add option to keep missing clusters in config (derailed#2213)

Co-authored-by: Clément Loiselet <clement.loiselet@cbp-group.com>

placintaalexandru pushed a commit to placintaalexandru/k9s that referenced this issue Apr 3, 2024
fix(derailed#1359): add option to keep missing clusters in config (derailed#2213)

Co-authored-by: Clément Loiselet <clement.loiselet@cbp-group.com>