diff --git a/content/en/metrics/otlp.md b/content/en/metrics/otlp.md
index e3dd027f3cea9..fdd330a28bdeb 100644
--- a/content/en/metrics/otlp.md
+++ b/content/en/metrics/otlp.md
@@ -121,14 +121,13 @@ You may add all resource attributes as tags by using the `resource_attributes_as
 OpenTelemetry defines certain semantic conventions related to host names. If an OTLP payload has a known hostname attribute, Datadog honors these conventions and tries to use its value as a hostname. The semantic conventions are considered in the following order:
 
 1. `datadog.host.name`, a Datadog-specific hostname convention
-2. `k8s.node.name`, the Kubernetes node name
-3. Cloud provider-specific conventions, based on the `cloud.provider` semantic convention
-4. `host.id`, the unique host ID
-5. `host.name`, the system hostname
-6. `container.id`, the container ID
+1. Cloud provider-specific conventions, based on the `cloud.provider` semantic convention
+1. Kubernetes-specific conventions from the `k8s.node.name` and `k8s.cluster.name` semantic conventions
+1. `host.id`, the unique host ID
+1. `host.name`, the system hostname
 
 If none are present, Datadog assigns a system-level hostname to payloads.
 
-On the OpenTelemetry Collector, add the ['resource detection' processor][1] to your pipelines for accurate hostname resolution.
+If sending data from a remote host, add the ['resource detection' processor][1] to your pipelines for accurate hostname resolution.
 
 ### Example
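For the 'resource detection' processor recommendation above, the following is a minimal sketch of what that addition could look like in a Collector configuration. It reuses the detector list from the trace pipeline example further down in this changeset and assumes the `otlp` receiver, `batch` processor, and `datadog` exporter are already defined in the same file:

```yaml
processors:
  resourcedetection:
    # Detectors are tried in order; `system` supplies the fallback hostname.
    detectors: [gce, ecs, ec2, azure, system]

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [datadog]
```

Running `resourcedetection` before `batch` attaches host attributes before data is batched for export.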
diff --git a/content/en/tracing/trace_collection/open_standards/otel_collector_datadog_exporter.md b/content/en/tracing/trace_collection/open_standards/otel_collector_datadog_exporter.md
index 078eab950b66e..c1f1871bbdc7f 100644
--- a/content/en/tracing/trace_collection/open_standards/otel_collector_datadog_exporter.md
+++ b/content/en/tracing/trace_collection/open_standards/otel_collector_datadog_exporter.md
@@ -27,12 +27,14 @@ datadog:
   site: {{< region-param key="dd_site" code="true" >}}
 ```
 
-On each OpenTelemetry-instrumented application, set the resource attributes `deployment.environment`, `service.name`, and `service.version` using [the language's SDK][1]. As a fall-back, you can also configure environment, service name, and service version at the collector level for unified service tagging by following the [example configuration file][7]. The exporter attempts to get a hostname by checking the following sources in order, falling back to the next one if the current one is unavailable or invalid:
+On each OpenTelemetry-instrumented application, set the resource attributes `deployment.environment`, `service.name`, and `service.version` using [the language's SDK][1].
+
+The exporter attempts to get a hostname by checking the following sources in order, falling back to the next one if the current one is unavailable or invalid:
 
 1. Hostname set in the OTLP resource
 1. Manually set hostname in the exporter configuration
-1. EC2 non-default hostname (if in EC2 instance)
-1. EC2 instance id (if in EC2 instance)
+1. Cloud provider API hostname
+1. Kubernetes hostname
 1. Fully qualified domain name
 1. Operating system host name
 
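Source 2 in that list, a manually set hostname, can be illustrated with a minimal sketch; `my-collector-host` is a placeholder value and the API key is copied from the example configuration above:

```yaml
exporters:
  datadog:
    api:
      key: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    # A manually set hostname overrides every source below it in the list;
    # only a hostname set on the OTLP resource takes precedence.
    hostname: my-collector-host
```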
@@ -44,11 +46,7 @@ The Datadog exporter for the OpenTelemetry Collector is currently in beta. It ma
 The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument.
 For examples of supplying a configuration file, see the [environment specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9].
 
-The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following:
- - A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics.
-Important! Without this timeout setting, trace related metrics including .hits, .errors, and .duration for different services and service resources will be inaccurate over periods of time.
-
-Here is an example trace pipeline configured with an `otlp` receiver, `batch` processor, `resourcedetection` processor and `datadog` exporter:
+Here is an example trace pipeline configured with an `otlp` receiver, `batch` processor and `datadog` exporter:
 
 ```
 receivers:
@@ -59,27 +57,22 @@ receivers:
 
 processors:
   batch:
-    timeout: 10s
-  resourcedetection:
-    detectors: [gce, ecs, ec2, azure, system]
 
 exporters:
-  datadog/api:
-
+  datadog:
     host_metadata:
       tags:
         - example:tag
-
     api:
       key: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
-      site: datadoghq.eu
+      site: {{< region-param key="dd_site" code="true" >}}
 
 service:
   pipelines:
     traces:
       receivers: [otlp]
-      processors: [batch, resourcedetection]
-      exporters: [datadog/api]
+      processors: [batch]
+      exporters: [datadog]
 ```
 
 ## Environment specific setup
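The `otlp` receiver in the example above listens on its default gRPC port. As a sketch, here is the same receiver with the endpoints spelled out and OTLP/HTTP enabled as well, matching the `4317`/`4318` defaults referenced in the Docker section below:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317  # OTLP/gRPC
      http:
        endpoint: 0.0.0.0:4318  # OTLP/HTTP
```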
@@ -100,28 +93,34 @@ service:
 Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers).
+<div class="alert alert-warning">
+The latest tag of the OpenTelemetry Collector Contrib distro is not updated on every release.
+Pin the Collector to the latest version to pick up the latest changes.
+</div>
+
 #### Receive traces from host
 
 1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter.
 
 2. Choose a published Docker image such as [`otel/opentelemetry-collector-contrib:latest`][12].
 
-3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on several ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include:
+3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on several ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `4317`, but common protocols and their ports include:
 
    - Zipkin/HTTP on port `9411`
   - Jaeger/gRPC on port `14250`
   - Jaeger/HTTP on port `14268`
   - Jaeger/Compact on port (UDP) `6831`
-  - OTLP/gRPC on port `55680`
+  - OTLP/gRPC on port `4317`
   - OTLP/HTTP on port `4318`
 
 4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example:
 
    ```
   $ docker run \
-      -p 55680:55680 \
-      -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
-      otel/opentelemetry-collector-contrib
+      -p 4317:4317 \
+      --hostname $(hostname) \
+      -v $(pwd)/otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml \
+      otel/opentelemetry-collector-contrib:
    ```
 
 5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter)
@@ -145,13 +144,15 @@ Run an Opentelemetry Collector container to receive traces either from the [inst
 # Datadog Agent
 docker run -d --name opentelemetry-collector \
     --network \
-    -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
-    otel/opentelemetry-collector-contrib
+    --hostname $(hostname) \
+    -v $(pwd)/otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml \
+    otel/opentelemetry-collector-contrib:
 
 # Application
 docker run -d --name app \
     --network \
-    -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \
+    --hostname $(hostname) \
+    -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317 \
     company/app:latest
 ```
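The two `docker run` commands above can also be written as a Compose file. The following is only a sketch under the same assumptions: the Collector image tag, network, and `company/app:latest` image are placeholders, and `hostname:` stands in for the `--hostname $(hostname)` flags so both containers report the same host name:

```yaml
# docker-compose.yaml (sketch)
services:
  opentelemetry-collector:
    image: otel/opentelemetry-collector-contrib:latest  # pin a specific release tag instead of latest
    hostname: my-host  # placeholder; use the real host name
    volumes:
      - ./otel_collector_config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"

  app:
    image: company/app:latest
    hostname: my-host  # placeholder; use the real host name
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4317
    depends_on:
      - opentelemetry-collector
```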
@@ -221,14 +222,13 @@ A full example Kubernetes manifest for deploying the OpenTelemetry Collector as
    # ...
    ```
 
-3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s`, and `k8sattributes` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline.
+3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor and a `k8sattributes` processor. These should be included along with the `datadog` exporter and added to the `traces` pipeline.
 
    In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section:
 
    ```yaml
    # ...
    batch:
-      timeout: 10s
    k8sattributes:
    # ...
    ```
@@ -271,7 +271,7 @@ spec:
               fieldPath: status.hostIP
         # This is picked up by the opentelemetry sdks
         - name: OTEL_EXPORTER_OTLP_ENDPOINT
-          value: "http://$(HOST_IP):55680"
+          value: "http://$(HOST_IP):4317"
 ```
 
 To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][4].
@@ -321,7 +321,7 @@ This configuration ensures consistent host metadata and centralizes the configur
 [8]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md#pipelines
 [9]: https://github.com/open-telemetry/opentelemetry-collector/tree/main/examples
 [10]: https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor#batch-processor
-[11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest
+[11]: https://github.com/open-telemetry/opentelemetry-collector-releases/releases/latest
 [12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags
 [13]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/datadogexporter/example/example_k8s_manifest.yaml
 [14]: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md#running-as-an-agent
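Putting the Kubernetes pieces together, a gateway-mode (standalone collector) `traces` pipeline along the lines of step 3 above might look like the following sketch; it assumes the `otlp` receiver and `datadog` exporter are configured as in the earlier example:

```yaml
processors:
  batch:
  k8sattributes:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [datadog]
```

Placing `k8sattributes` ahead of `batch` attaches pod metadata to spans before they are batched and exported.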