feat: how to use custom Otel collector.
Updates the telemetry documentation explaining how to configure the
Kubewarden stack to send data to an Otel collector sidecar or to a custom
Otel collector running somewhere in the cluster.

Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Co-authored-by: John Krug <john.krug@suse.com>
jvanz and jhkrug committed Dec 18, 2024
1 parent 0ed3bc4 commit 5ff6ce2
Showing 5 changed files with 378 additions and 59 deletions.
14 changes: 11 additions & 3 deletions docs/howtos/telemetry/10-opentelemetry-qs.md
@@ -18,16 +18,24 @@ observability. It enables your microservices to provide metrics, logs and traces
Kubewarden's components are instrumented with the OpenTelemetry SDK, reporting data to an
OpenTelemetry collector -- called the agent.

By following this documentation, we will integrate OpenTelemetry using the following architecture:
You can configure the OpenTelemetry collector for Kubewarden in two modes:
- Sidecar: you configure the OpenTelemetry collector in the Kubewarden
Helm chart. It is deployed as a sidecar container in the same Pod as the
Kubewarden component.
- Custom mode: you deploy the OpenTelemetry collector in the same cluster. The
Kubewarden components send data to it.
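
As a quick orientation, the mode is selected through the `telemetry.mode` value of the Kubewarden Helm charts. The `sidecar` value below is taken from the examples later in this commit; the `custom` literal is only a hedged guess based on the description above, not an exact excerpt of the chart values:

```yaml
# Sidecar mode: the Kubewarden Helm charts inject an OpenTelemetry collector
# container next to each Kubewarden component.
telemetry:
  mode: sidecar

# Custom mode (hypothetical sketch): you run and manage your own collector in
# the cluster and the Kubewarden components send OTLP data to it.
# telemetry:
#   mode: custom
```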

Using this documentation, you integrate OpenTelemetry in sidecar mode with the following architecture:

- Each Pod of the Kubewarden stack will have an OpenTelemetry sidecar.
- The sidecar receives tracing and monitoring information from the Kubewarden component via the OpenTelemetry Protocol (OTLP)
- The OpenTelemetry collector will:
  - Send the trace events to a central Jaeger instance
  - Expose Prometheus metrics on a specific port
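
To make the data flow above concrete, here is a minimal sketch of the kind of collector configuration the sidecar ends up running. It is illustrative only: the Kubewarden Helm charts generate the real sidecar configuration for you, and the `prometheus` exporter assumes a collector distribution that ships it (for example the contrib image):

```yaml
receivers:
  otlp:                # the Kubewarden component pushes telemetry here via OTLP
    protocols:
      grpc: {}
exporters:
  otlp/jaeger:         # forward trace events to the central Jaeger instance
    endpoint: my-open-telemetry-collector.jaeger.svc.cluster.local:4317
    tls:
      insecure: true
  prometheus:          # expose the collected metrics on a scrape port
    endpoint: "0.0.0.0:8080"
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```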

For more information about the other deployment modes, please refer to the [OpenTelemetry official
documentation](https://opentelemetry.io/docs/).
For more information about the other deployment modes, refer to the
[OpenTelemetry official documentation](https://opentelemetry.io/docs/). The custom mode section covers integrating
Kubewarden with a custom OpenTelemetry collector in more detail.

Let's first deploy OpenTelemetry in a Kubernetes cluster, so we can reuse it in the next sections,
which specifically address tracing and metrics.
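
The concrete deployment steps are in the part of this page that is collapsed below. As a hedged sketch only (the repository URL and chart name come from the upstream OpenTelemetry Helm charts, the namespace is an assumption, and the operator also expects cert-manager to be installed first), deploying the OpenTelemetry operator usually looks like this:

```console
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
# The operator relies on cert-manager for its webhook certificates.
helm install --wait --namespace open-telemetry --create-namespace \
  opentelemetry-operator open-telemetry/opentelemetry-operator
```
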
16 changes: 9 additions & 7 deletions docs/howtos/telemetry/20-tracing-qs.md
@@ -118,12 +118,14 @@ following contents:

```yaml
 telemetry:
-  tracing:
-    enabled: True
-    jaeger:
-      endpoint: "my-open-telemetry-collector.jaeger.svc.cluster.local:4317"
-      tls:
-        insecure: true
+  mode: sidecar
+  tracing: True
+  sidecar:
+    tracing:
+      jaeger:
+        endpoint: "my-open-telemetry-collector.jaeger.svc.cluster.local:4317"
+        tls:
+          insecure: true
```
:::caution
@@ -183,7 +185,7 @@ Next, let's define a ClusterAdmissionPolicy:

```yaml
 kubectl apply -f - <<EOF
-apiVersion: policies.kubewarden.io/v1alpha2
+apiVersion: policies.kubewarden.io/v1
 kind: ClusterAdmissionPolicy
 metadata:
   name: safe-labels
92 changes: 43 additions & 49 deletions docs/howtos/telemetry/30-metrics-qs.md
@@ -31,6 +31,44 @@ that allows us to define Prometheus' Targets intuitively.
There are many ways to install and set up Prometheus. For ease of deployment, we will use the
Prometheus community Helm chart.

The `prometheus-operator` deployed as part of this Helm chart defines the concept of [Service
Monitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor),
which declaratively describe the services Prometheus should monitor.

In our case, we are adding ServiceMonitors targeting the `kubewarden` namespace, selecting services that
match the label `app: kubewarden-policy-server-default` or the label `app.kubernetes.io/name: kubewarden-controller`.
This way, the Prometheus Operator can inspect which Kubernetes Endpoints are tied to services matching these conditions.

Let's create the two ServiceMonitors named `kubewarden` and `kubewarden-controller` to be used by the
default Prometheus instance installed by the Helm chart. For that, you can create the following values file:

```console
cat <<EOF > kube-prometheus-stack-values.yaml
prometheus:
  additionalServiceMonitors:
    - name: kubewarden
      selector:
        matchLabels:
          app: kubewarden-policy-server-default
      namespaceSelector:
        matchNames:
          - kubewarden
      endpoints:
        - port: metrics
          interval: 10s
    - name: kubewarden-controller
      selector:
        matchLabels:
          app.kubernetes.io/name: kubewarden-controller
      namespaceSelector:
        matchNames:
          - kubewarden
      endpoints:
        - port: metrics
          interval: 10s
EOF
```

Let's install the Prometheus stack Helm Chart:

:::note
@@ -47,52 +85,6 @@ helm install --wait --create-namespace \
prometheus prometheus-community/kube-prometheus-stack
```
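
As a quick, optional check (assuming the kube-prometheus-stack CRDs were installed by the command above), you can confirm that the two ServiceMonitors exist before moving on:

```console
kubectl get servicemonitors --all-namespaces | grep kubewarden
```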

The `prometheus-operator` deployed as part of this Helm chart defines the concept of [Service
Monitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor),
to define which services should be monitored by Prometheus declaratively.

In our case, we are adding a ServiceMonitor targeting the `kubewarden` namespace for services that
match labels `app=kubewarden-policy-server-default` and `app.kubernetes.io/name: kubewarden-controller`.
This way, the Prometheus Operator can inspect which Kubernetes Endpoints are tied to services matching these conditions.

Let's create the two ServiceMonitors named `kubewarden-controller` and `kubewarden-policy-server` using the following manifests:

```yaml
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubewarden-controller
  namespace: kubewarden
spec:
  endpoints:
    - interval: 10s
      port: metrics
  namespaceSelector:
    matchNames:
      - kubewarden
  selector:
    matchLabels:
      app.kubernetes.io/name: kubewarden-controller
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubewarden-policy-server
  namespace: kubewarden
spec:
  endpoints:
    - interval: 10s
      port: metrics
  namespaceSelector:
    matchNames:
      - kubewarden
  selector:
    matchLabels:
      app: kubewarden-policy-server-default
EOF
```

## Install Kubewarden

We can now install Kubewarden in the recommended way with Helm charts.
@@ -125,9 +117,11 @@ in Kubewarden. Write the `kubewarden-values.yaml` file with the following contents:

```yaml
 telemetry:
-  metrics:
-    enabled: True
-    port: 8080
+  mode: sidecar
+  metrics: True
+  sidecar:
+    metrics:
+      port: 8080
```
Now, let's install the Helm charts:
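
The install commands themselves are collapsed in this view. As a hedged sketch (the chart repository and release names are the usual Kubewarden ones, and `kubewarden-values.yaml` is the file written above), the installation typically looks like this:

```console
helm repo add kubewarden https://charts.kubewarden.io
helm repo update
helm install --wait --namespace kubewarden --create-namespace \
  kubewarden-crds kubewarden/kubewarden-crds
helm install --wait --namespace kubewarden \
  --values kubewarden-values.yaml \
  kubewarden-controller kubewarden/kubewarden-controller
helm install --wait --namespace kubewarden \
  --values kubewarden-values.yaml \
  kubewarden-defaults kubewarden/kubewarden-defaults
```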
