Update OpenTelemetry Collector documentation #14542

Merged
merged 5 commits on Jul 18, 2022
Changes from 2 commits
11 changes: 5 additions & 6 deletions content/en/metrics/otlp.md
@@ -121,14 +121,13 @@ You may add all resource attributes as tags by using the `resource_attributes_as
OpenTelemetry defines certain semantic conventions related to host names. If an OTLP payload has a known hostname attribute, Datadog honors these conventions and tries to use its value as a hostname. The semantic conventions are considered in the following order:

1. `datadog.host.name`, a Datadog-specific hostname convention
2. `k8s.node.name`, the Kubernetes node name
3. Cloud provider-specific conventions, based on the `cloud.provider` semantic convention
4. `host.id`, the unique host ID
5. `host.name`, the system hostname
6. `container.id`, the container ID
1. Cloud provider-specific conventions, based on the `cloud.provider` semantic convention
1. Kubernetes-specific conventions from the `k8s.node.name` and `k8s.cluster.name` semantic conventions.
1. `host.id`, the unique host ID
1. `host.name`, the system hostname

If none are present, Datadog assigns a system-level hostname to payloads.
On the OpenTelemetry Collector, add the ['resource detection' processor][1] to your pipelines for accurate hostname resolution.
If sending data from a remote host, add the ['resource detection' processor][1] to your pipelines for accurate hostname resolution.
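
For illustration only (not part of this changeset), the Datadog-specific convention in item 1 could be set with the Collector's `resource` processor; the hostname value below is hypothetical:

```yaml
processors:
  resource:
    attributes:
      # Upsert the Datadog-specific hostname attribute on every resource.
      - key: datadog.host.name
        value: my-onprem-host-01   # hypothetical hostname
        action: upsert
```

Alternatively, the `resourcedetection` processor mentioned above populates attributes such as `host.name` and the cloud provider attributes automatically.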

### Example

@@ -25,12 +25,14 @@ datadog:
site: {{< region-param key="dd_site" code="true" >}}
```

On each OpenTelemetry-instrumented application, set the resource attributes `deployment.environment`, `service.name`, and `service.version` using [the language's SDK][1]. As a fall-back, you can also configure environment, service name, and service version at the collector level for unified service tagging by following the [example configuration file][7]. The exporter attempts to get a hostname by checking the following sources in order, falling back to the next one if the current one is unavailable or invalid:
On each OpenTelemetry-instrumented application, set the resource attributes `deployment.environment`, `service.name`, and `service.version` using [the language's SDK][1].
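
Where setting these attributes in code is impractical, OpenTelemetry SDKs also read the standard `OTEL_RESOURCE_ATTRIBUTES` environment variable. A minimal sketch (not part of this changeset) with hypothetical values, in Kubernetes-style `env` syntax:

```yaml
env:
  # Read by the OpenTelemetry SDK at startup and attached to every span and metric.
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "service.name=my-app,service.version=1.2.3,deployment.environment=production"
```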

The exporter attempts to get a hostname by checking the following sources in order, falling back to the next one if the current one is unavailable or invalid:

1. Hostname set in the OTLP resource
1. Manually set hostname in the exporter configuration
1. EC2 non-default hostname (if in EC2 instance)
1. EC2 instance id (if in EC2 instance)
1. Cloud provider API hostname
1. Kubernetes hostname
1. Fully qualified domain name
1. Operating system host name
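
For illustration only (not part of this changeset), source 2 in this list corresponds to the exporter's `hostname` setting; the values below are hypothetical placeholders:

```yaml
exporters:
  datadog:
    # Explicitly set the hostname; takes precedence over the detected sources further down this list.
    hostname: my-custom-hostname
    api:
      key: <YOUR_API_KEY>
```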

@@ -42,11 +44,7 @@ The Datadog exporter for the OpenTelemetry Collector is currently in beta. It ma

The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=<path/to/configuration_file>` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9].

The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following:
- A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics.
<div class="alert alert-info"><strong>Important!</strong> Without this <code>timeout</code> setting, trace related metrics including <code>.hits</code>, <code>.errors</code>, and <code>.duration</code> for different services and service resources will be inaccurate over periods of time.</div>

Here is an example trace pipeline configured with an `otlp` receiver, `batch` processor, `resourcedetection` processor and `datadog` exporter:
Here is an example trace pipeline configured with an `otlp` receiver, `batch` processor and `datadog` exporter:

```
receivers:
@@ -57,27 +55,22 @@

processors:
batch:
timeout: 10s
resourcedetection:
detectors: [gce, ecs, ec2, azure, system]

exporters:
datadog/api:

datadog:
host_metadata:
tags:
- example:tag

api:
key: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
site: datadoghq.eu
site: {{< region-param key="dd_site" code="true" >}}

service:
pipelines:
traces:
receivers: [otlp]
processors: [batch, resourcedetection]
exporters: [datadog/api]
processors: [batch]
exporters: [datadog]
```

## Environment specific setup
@@ -98,28 +91,34 @@

Run an OpenTelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host) or from [other containers](#receive-traces-from-other-containers).

<div class="alert alert-info">
The latest tag of the OpenTelemetry Collector Contrib distro <a href="https://github.com/open-telemetry/opentelemetry-collector-releases/issues/73">is not updated on every release</a>.
Pin the Collector to the latest version to pick up the latest changes.
</div>

#### Receive traces from host

1. Create an `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter.

2. Choose a published Docker image such as [`otel/opentelemetry-collector-contrib:latest`][12].

3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on several ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include:
3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on several ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `4317`, but common protocols and their ports include the following (a matching receiver configuration sketch appears after these steps):

- Zipkin/HTTP on port `9411`
- Jaeger/gRPC on port `14250`
- Jaeger/HTTP on port `14268`
- Jaeger/Compact on port (UDP) `6831`
- OTLP/gRPC on port `55680`
- OTLP/gRPC on port `4317`
- OTLP/HTTP on port `4318`

4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example:

```
$ docker run \
-p 55680:55680 \
-p 4317:4317 \
--hostname $(hostname) \
-v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
otel/opentelemetry-collector-contrib
otel/opentelemetry-collector-contrib:<VERSION>
```

5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter)
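
For illustration only (not part of this changeset), the ports listed in step 3 map to Collector receivers roughly as follows; the endpoints are the conventional defaults, and each enabled receiver must also be added to a pipeline:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # OTLP/gRPC
      http:
        endpoint: 0.0.0.0:4318   # OTLP/HTTP
  zipkin:
    endpoint: 0.0.0.0:9411       # Zipkin/HTTP
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250  # Jaeger/gRPC
      thrift_http:
        endpoint: 0.0.0.0:14268  # Jaeger/HTTP
      thrift_compact:
        endpoint: 0.0.0.0:6831   # Jaeger/Compact (UDP)
```
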
Expand All @@ -143,13 +142,15 @@ Run an Opentelemetry Collector container to receive traces either from the [inst
# OpenTelemetry Collector
docker run -d --name opentelemetry-collector \
--network <NETWORK_NAME> \
--hostname $(hostname) \
-v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
otel/opentelemetry-collector-contrib
otel/opentelemetry-collector-contrib:<VERSION>

# Application
docker run -d --name app \
--network <NETWORK_NAME> \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \
--hostname $(hostname) \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317 \
company/app:latest
```

@@ -219,14 +220,13 @@ A full example Kubernetes manifest for deploying the OpenTelemetry Collector as
# ...
```

3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s`, and `k8sattributes` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline.
3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor and a `k8sattributes` processor. These should be included along with the `datadog` exporter and added to the `traces` pipeline.

In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section:

```yaml
# ...
batch:
timeout: 10s
k8sattributes:
# ...
```
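
For illustration only (not part of this changeset), a minimal sketch of the corresponding `traces` pipeline; `k8sattributes` is typically placed before `batch` so spans are enriched before they are grouped:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      # k8sattributes first, then batch, then export to Datadog.
      processors: [k8sattributes, batch]
      exporters: [datadog]
```
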
@@ -269,7 +269,7 @@ spec:
fieldPath: status.hostIP
# This is picked up by the OpenTelemetry SDKs
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: "http://$(HOST_IP):55680"
value: "http://$(HOST_IP):4317"
```

To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][4].
@@ -319,7 +319,7 @@ This configuration ensures consistent host metadata and centralizes the configur
[8]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md#pipelines
[9]: https://github.com/open-telemetry/opentelemetry-collector/tree/main/examples
[10]: https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor#batch-processor
[11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest
[11]: https://github.com/open-telemetry/opentelemetry-collector-releases/releases/latest
[12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags
[13]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/datadogexporter/example/example_k8s_manifest.yaml
[14]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md#running-as-an-agent