
Leverage KUBECONFIG when creating k8s client #13916

Merged
merged 6 commits into elastic:master from add_KUBECONFIG_usage on Oct 10, 2019

Conversation

ChrsMark
Member

@ChrsMark ChrsMark commented Oct 4, 2019

Closes #13648.

This PR changes the k8s client creation so that:

  1. if kube_config is provided, a client is returned using it
  2. otherwise, if the KUBECONFIG env var is present and points to a valid file, a client is returned using it
  3. otherwise we are in in_cluster mode, so inClusterConfig is used (BuildConfigFromFlags already implements this)

cc: @exekias , @odacremolbap

@ChrsMark ChrsMark self-assigned this Oct 4, 2019
@ChrsMark ChrsMark added the containers Related to containers use case label Oct 4, 2019
@ChrsMark ChrsMark force-pushed the add_KUBECONFIG_usage branch 2 times, most recently from c8d2334 to 196374a on October 9, 2019 07:00
@odacremolbap
Contributor

I don't know how the elastic config library works, and that's probably something to take into account here. Usually, precedence for configuration variables would be:

  • command line args
  • config file when not using a default location
  • environment variables
  • default config file

Is this the intended scenario? If not, can you clarify what the target is? It is not 100% clear from reading the related issue.

On implementation, it would be nice to have a GetConfigFile function that would return whatever configuration file is to be used, or empty if none is found, falling back to inCluster. This would make GetKubernetesClient a bit dumber, which I think is a good thing.

@ChrsMark
Member Author

ChrsMark commented Oct 9, 2019

The implementation focuses on the description that is summarised on #13648 (comment).

Currently we have the kube_config setting; if it is empty, client initialization falls back to in_cluster mode.

So with this feature we add an extra step to check if KUBECONFIG is present and make use of it.

The aim is to leverage the KUBECONFIG environment variable when kube_config is not set, before trying in_cluster mode at client initialisation. Client initialisation is called from different places (autodiscovery, add_kubernetes_metadata, k8s modules), so introducing a new function to initialise the config the way you suggest means that all of these places would have to use it. With the current approach, a thin layer is added only at the point where it is actually required. Does that make sense?

@odacremolbap
Contributor

yep, I like the idea, and the original issue is clear about the intention.

I'm still missing the broader view of args precedence: https://www.elastic.co/guide/en/beats/libbeat/master/config-file-format-cli.html

so, if I'm not wrong, precedence will be:

  • command line flags (overwriting kubeconfig settings from the config file)
  • config file
  • environment variable
  • fallback to in-cluster

That sounds good, although I'm hesitant about whether $HOME/.kube/config should be in the list, as would be expected for any kubernetes client.

Can you add to the kubernetes docs how it works as of your changes?

@ChrsMark
Member Author

ChrsMark commented Oct 9, 2019

> yep, I like the idea, and the original issue is clear about the intention.
>
> I'm still missing the broader view of args precedence: https://www.elastic.co/guide/en/beats/libbeat/master/config-file-format-cli.html
>
> so, if I'm not wrong, precedence will be:
>
>   • command line flags (overwriting kubeconfig settings from the config file)
>   • config file
>   • environment variable
>   • fallback to in-cluster

I'm not sure a user would want to use command line flags to set module-specific settings. In our case it would look like ./metricbeat -E "metricbeat.modules.kubernetes.kube_config=somepath" 🤔 .

However, I think this is not a concern for this PR, since the behaviour of flags is to overwrite specific configuration settings, and this patch runs after the whole configuration is already loaded.

> That sounds good, although I'm hesitant about whether $HOME/.kube/config should be in the list, as would be expected for any kubernetes client.

Regarding $HOME/.kube/config, I would propose opening a separate issue to discuss it.

> Can you add to the kubernetes docs how it works as of your changes?

Sure thing I will update the docs!

@odacremolbap
Contributor

Agreed on all of the above.

@ChrsMark ChrsMark requested a review from a team as a code owner October 9, 2019 12:39
@ChrsMark ChrsMark force-pushed the add_KUBECONFIG_usage branch from 95d2c8f to 6f687b7 on October 9, 2019 12:47
@ChrsMark
Member Author

ChrsMark commented Oct 9, 2019

TBH I'm not sure if these docs changes are enough, but we don't have a single place to document all k8s-related configuration options for autodiscovery, add_kubernetes_metadata, and the kubernetes module. Maybe we could somehow unify these? @exekias wdyt?

@exekias
Contributor

exekias commented Oct 9, 2019

That's a great idea. Maybe you can open another PR to unify the kube provider parameters somehow! That said, I would not block this PR from getting in.

@ChrsMark ChrsMark force-pushed the add_KUBECONFIG_usage branch from 6f687b7 to 69bd269 on October 10, 2019 08:55
@ChrsMark
Member Author

ChrsMark commented Oct 10, 2019

> That's a great idea. Maybe you can open another PR to unify the kube provider parameters somehow! That said, I would not block this PR from getting in.

I opened an issue for this documentation topic (#13997).
Anything else for this specific PR I should add/change?

Contributor

@exekias exekias left a comment


Thank you for taking this! Left a minor comment.

@@ -1339,6 +1339,7 @@ the Kubernetes node.
processors:
- add_kubernetes_metadata:
host: <hostname>
# It defaults to `KUBECONFIG` variable if present.
Contributor


Can you update these comments to make sure they say environment variable instead? Just in case, to avoid confusion.

@@ -79,6 +79,7 @@ metricbeat.modules:
add_metadata: true
# When used outside the cluster:
#host: node_name
# It defaults to `KUBECONFIG` environment variable if present.
Contributor


From my POV this is not accurate.
A user could think that even if the kube_config setting is set, KUBECONFIG will override it, but that's not the case.

Something like: if kube_config is not set, the KUBECONFIG env var will be checked, and if that is not present either, it will fall back to InCluster.

WDYT?

Member Author


Yeah, you are right about the overwriting confusion. I'll go ahead and try to change it.
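The reworded comment might read something like the following. This is only a sketch of possible wording along the lines of the reviewer's suggestion, not the actual diff:

```yaml
metricbeat.modules:
- module: kubernetes
  # If kube_config is not set, the KUBECONFIG environment variable
  # will be checked; if that is not present either, the client falls
  # back to in-cluster configuration.
  #kube_config: ~/.kube/config
```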

Signed-off-by: chrismark <chrismarkou92@gmail.com>
@ChrsMark ChrsMark force-pushed the add_KUBECONFIG_usage branch from 519dab2 to 3ad34a6 on October 10, 2019 13:13
@ChrsMark
Member Author

jenkins, test this again please

@ChrsMark ChrsMark merged commit 1c9cd3d into elastic:master Oct 10, 2019
Labels
containers Related to containers use case enhancement Team:Integrations Label for the Integrations team v7.5.0
Development

Successfully merging this pull request may close these issues.

Consider leveraging KUBECONFIG for all our Kubernetes features
4 participants