
Is it possible to get the cluster name while sending notifications to MS Teams? #382

Open
Naresh240 opened this issue Jun 9, 2022 · 12 comments


@Naresh240

We are trying to send notifications from different clusters; we create roughly 10 clusters at a time and need the notifications to go to the same channel with the cluster name included, so that we can easily identify which cluster a notification came from.

Where do I need to specify the cluster name?

@aryan9600
Member

You can mention it in the .spec.summary of your Alert. Ref: https://fluxcd.io/docs/components/notification/alert/
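
For illustration, a minimal sketch of an Alert with the cluster name hard-coded in .spec.summary (the v1beta2 API version, provider name, and cluster name below are placeholders, not taken from this thread):

apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Alert
metadata:
  name: msteams
  namespace: flux-system
spec:
  summary: "cluster: my-cluster"  # hypothetical per-cluster value
  providerRef:
    name: msteams
  eventSources:
    - kind: Kustomization
      name: '*'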

@Naresh240
Author

In .spec.summary we would have to hard-code it like a message, but we don't want to touch our Alert every time. When the notification arrives in our channel, the value should be replaced with the cluster name automatically. Is there any way to do that?

@somtochiama
Member

@Naresh240 if what you mean is the cluster name being automatically added to the alert, that is currently not supported.

@makkes
Member

makkes commented Aug 23, 2022

I think it would generally make sense to have a way of exposing a cluster name through the Events API. The question is: how? One could use the .metadata.uid of the kube-system Namespace, which is assumed to be (1) available on every Kubernetes cluster and (2) unique. But that ID changes every time the cluster is recreated, which might happen on a regular basis, especially in dev environments, and you don't want your handle to change every time. Therefore, notification-controller must provide an API to let users explicitly set the cluster name/ID they expect alerts from that cluster to carry, and I'm led to believe that .spec.summary might just be the right field for that. It can easily be set per cluster using kustomize overlays. Or maybe a dedicated .spec.origin field would help here?

@stefanprodan
Member

The cluster name is unknown to Kubernetes itself, so there is no way to get this info automatically. You can use a kustomize patch that targets all Alert objects and sets the cluster name in .spec.summary.
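
As a sketch of that approach (the file layout and cluster name are assumptions), each cluster's kustomize overlay could carry a patch that targets every Alert in the build:

# kustomization.yaml in the per-cluster overlay (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - target:
      kind: Alert  # matches all Alert objects
    patch: |-
      - op: add
        path: /spec/summary
        value: "cluster: my-cluster"  # hypothetical cluster name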

@makkes
Member

makkes commented Aug 23, 2022

> The cluster name is unknown to Kubernetes itself, so there is no way to get this info automatically. You can use a kustomize patch that targets all Alert objects and sets the cluster name in .spec.summary.

How about adding a dedicated origin field (whatever we choose to call it) to the Alert spec?

@stefanprodan
Member

> How about adding a dedicated origin field (whatever we choose to call it) to the Alert spec?

Why add a second field that does the same thing as summary? I would consider adding a command-line flag to notification-controller, e.g. --cluster-name, that would inject the value into the alert message body.

@adberger

@stefanprodan What could an implementation of this look like when using Grafana annotations?
The summary field doesn't seem to be attached in https://github.com/fsequeira1/notification-controller/blob/main/internal/notifier/grafana.go, or am I missing something?

It would be nice to somehow show which cluster an annotation comes from.
We have Flux dashboards for multiple clusters and it's very hard to distinguish which annotation belongs to which cluster.

@joelhess

This feels like a big deal. We want to use Flux to orchestrate deployments to a fleet of Kube clusters. If we don't know where the events are coming back from, then its value gets a little muddy.

@makkes
Member

makkes commented Apr 24, 2023

> This feels like a big deal. We want to use Flux to orchestrate deployments to a fleet of Kube clusters. If we don't know where the events are coming back from, then its value gets a little muddy.

What we (Weaveworks) did in Weave GitOps Enterprise is to instantiate an endpoint on the management cluster for each managed cluster so that each managed cluster hits a unique endpoint.
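
A generic sketch of that pattern outside Weave GitOps Enterprise (the endpoint URL and names are hypothetical): point each cluster's Provider at an address that identifies the cluster, so the receiving side can tell the senders apart:

apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Provider
metadata:
  name: fleet-receiver
  namespace: flux-system
spec:
  type: generic
  address: https://hooks.example.com/clusters/my-cluster  # unique path per cluster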

@devnev

devnev commented Oct 2, 2024

@makkes is there a solution that doesn't involve a management cluster? The current state makes alerts very difficult to use.

@stefanprodan
Member

stefanprodan commented Oct 2, 2024

This can be achieved with Flux variable substitutions.

At cluster creation you would generate a ConfigMap (from Terraform or another IaC tool) with the cluster name, region, env, etc like so:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: flux-system
data:
  cluster_name: my-cluster
  cluster_region: my-region

In the Flux Kustomization that applies the Alerts you would enable variable substitution like so:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: alerts
  namespace: flux-system
spec:
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-info

Finally, in the Alert manifests you would set the variables:

apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: msteams
  namespace: flux-system
spec:
  eventMetadata:
    app.kubernetes.io/cluster: "${cluster_name}"
    app.kubernetes.io/region: "${cluster_region}"
  providerRef:
    name: msteams
  eventSources:
    - kind: HelmRelease
      name: '*'
      namespace: apps
    - kind: HelmRelease
      name: '*'
      namespace: addons

The eventMetadata is injected by notification-controller into the Teams payload and displayed in the message body.
