
OpenTelemetry Operator for Kubernetes

The OpenTelemetry Operator is an implementation of a Kubernetes Operator.

At this point, the OpenTelemetry Collector is the only component it manages.

Getting started

To run the operator locally, run:

make install run

Once the opentelemetry-operator deployment is ready, create an OpenTelemetry Collector (otelcol) instance, like:

$ kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
    processors:
      queued_retry:

    exporters:
      logging:

    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [queued_retry]
          exporters: [logging]
EOF

WARNING: Until the OpenTelemetry Collector format is stable, changes may be required in the above example to remain compatible with the latest version of the OpenTelemetry Collector image being referenced.

This will create an OpenTelemetry Collector instance named simplest, exposing a jaeger-grpc port to consume spans from your instrumented applications and exporting those spans via the logging exporter, which writes them to the console (stdout) of the collector instance that receives them.
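
To verify that spans are reaching the collector, you can tail its logs. The deployment name below assumes the operator follows a <name>-collector naming convention; adjust it if the resources created in your cluster are named differently:

$ kubectl logs deployment/simplest-collector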

The config node holds the YAML that should be passed down as-is to the underlying OpenTelemetry Collector instances. Refer to the OpenTelemetry Collector documentation for a reference of the possible entries.

At this point, the Operator does not validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.

Deployment modes

The CustomResource for the OpenTelemetryCollector exposes a property named .Spec.Mode, which can be used to specify whether the collector should run as a DaemonSet or as a Deployment (default). See examples/daemonset.yaml for reference.
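
For illustration, here is a minimal sketch of a DaemonSet-mode instance. It assumes the .Spec.Mode property serializes as a lowercase mode field and reuses the configuration from the example above; check examples/daemonset.yaml for the authoritative version:

$ kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest-daemonset
spec:
  mode: daemonset
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
    processors:
      queued_retry:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [queued_retry]
          exporters: [logging]
EOF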

Running with the webhooks

When running make run, the webhooks aren't effective, as the manager starts on the local machine instead of in-cluster. To test the webhooks, you'll need to:

  1. configure a proxy between the Kubernetes API server and your host, so that the API server can reach the webhook on your local machine
  2. create the TLS certificates and place them, by default, at /tmp/k8s-webhook-server/serving-certs/tls.crt. The Kubernetes API server also has to be configured to trust the CA used to generate those certs. A sketch of this step follows this list.
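
As a minimal sketch of step 2, assuming openssl is available, self-signed serving certificates can be generated at the default location (a real setup would use a CA that the Kubernetes API server trusts):

$ mkdir -p /tmp/k8s-webhook-server/serving-certs
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
    -out /tmp/k8s-webhook-server/serving-certs/tls.crt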

In general, it's just easier to deploy the manager in a Kubernetes cluster instead. For that, you'll need cert-manager installed. You can install it by running:

make cert-manager
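
cert-manager typically installs into its own namespace. Assuming the default cert-manager namespace, you can check that its pods are running with:

$ kubectl get pods -n cert-manager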

Once it's ready, the following can be used to build and deploy a manager, along with the required webhook configuration:

make manifests docker-build docker-push deploy
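
To confirm the manager came up, check its pods. The opentelemetry-operator-system namespace below is an assumption based on common kubebuilder defaults; adjust it to whatever namespace the manifests in this repository actually use:

$ kubectl get pods -n opentelemetry-operator-system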

Contributing and Developing

Please see CONTRIBUTING.md.

Testing

With an existing cluster (such as minikube), run:

USE_EXISTING_CLUSTER=true make test

Tests can also be run without an existing cluster. For that, install kubebuilder; the tests will then bootstrap their own etcd and Kubernetes API server. Run against an existing cluster whenever possible, though.
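
Without the USE_EXISTING_CLUSTER flag, the same target presumably runs against the bootstrapped control plane:

make test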

License

Apache 2.0 License.
