[DOCS] OpenTelemetry - add OpenTelemetry protocol support content #4964

Merged (8 commits) on Mar 22, 2021
326 changes: 174 additions & 152 deletions docs/guide/opentelemetry-elastic.asciidoc
// Make tab-widgets work
include::../tab-widgets/code.asciidoc[]

https://opentelemetry.io/docs/concepts/what-is-opentelemetry/[OpenTelemetry] is a set
of APIs, SDKs, tooling, and integrations that enable the capture and management of
telemetry data from your services for greater observability. For more information about the
OpenTelemetry project, see the {ot-spec}[spec].

Elastic OpenTelemetry integrations allow you to reuse your existing OpenTelemetry
instrumentation to quickly analyze distributed traces and metrics to help you monitor
business KPIs and technical components with the {stack}.

There are two Elastic OpenTelemetry integrations available:

* <<open-telemetry-elastic-exporter,Elastic exporter on the OpenTelemetry collector>> (recommended)
* <<open-telemetry-elastic-protocol,APM Server native support of OpenTelemetry protocol>> (experimental)

[float]
[[open-telemetry-elastic-exporter]]
==== Elastic exporter on the OpenTelemetry collector

This is the recommended OpenTelemetry integration. We have extended the "contrib" OpenTelemetry collector by
adding an Elastic exporter so that you can drop this integration into your current OpenTelemetry setup.

The architecture consists of three main components.

image::images/open-telemetry-exporter-arch.png[OpenTelemetry Elastic exporter architecture diagram]

|===

| *Agents* | The OpenTelemetry agents instrument the applications and export the telemetry data to the OpenTelemetry collector.

| *OpenTelemetry collector* | The https://opentelemetry.io/docs/collector/configuration/#a-namereceiversaimg-width35-srchttpsrawgithubcomopen-telemetryopentelemetryiomainiconography32x32receiverssvgimg-receivers[receiver]
collects the telemetry data from the OpenTelemetry agent, and then the https://opentelemetry.io/docs/collector/configuration/#a-nameprocessorsaimg-width35-srchttpsrawgithubcomopen-telemetryopentelemetryiomainiconography32x32processorssvgimg-processors[processor]
defines optional transformations on the data before it's exported using the Elastic exporter.

| *Elastic exporter* | The exporter translates the OpenTelemetry data collected from your services, applications, and infrastructure to Elastic's protocol.
The data includes trace data and metrics data. By extending the OpenTelemetry collector, no changes are needed in your instrumented services to begin using the {stack}.

|===

[float]
[[open-telemetry-collector-config]]
===== Download and configure the collector

OpenTelemetry Collectors can be run as agents or as standalone collectors.
They can be deployed as often as necessary and scaled up or out. Deployment planning resources are available in
OpenTelemetry's {ot-collector}[Getting Started] documentation and {ot-scaling}[Collector Performance] research.

You can download the latest release of the Collector from the {ot-contrib}/releases[GitHub releases page]. The Elastic exporter
lives in the {ot-contrib}[`opentelemetry-collector-contrib` repository].

Docker images are available on {ot-dockerhub}[dockerhub]:

[source,bash]
----
docker pull otel/opentelemetry-collector-contrib
----
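
If you prefer to run the contrib image with Docker Compose, a minimal sketch might look like the
following. The configuration file name, mount path, and published port are assumptions; adjust them
to match your collector configuration.

[source,yaml]
----
version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib
    command: ["--config=/etc/otelcol/config.yaml"]   # point the collector at the mounted config
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC receiver
----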

To configure the collector, create a `yaml` configuration file.

This example configuration file accepts input from an OpenTelemetry Agent, processes the data, and sends it to an {ess} instance.

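A minimal configuration along those lines might look like the following sketch. Treat it as an
illustration rather than a complete file: the OTLP receiver, the `batch` processor, the host metrics
scrapers, and the endpoint URL shown here are assumptions to adapt to your environment.

[source,yaml]
----
receivers:
  otlp:              # assumed receiver for traces and metrics sent by OpenTelemetry agents
    protocols:
      grpc:
  hostmetrics: <1>
    collection_interval: 1m
    scrapers:
      cpu:
      load:
      memory:

processors:
  batch:             # assumed processor; batches data before export

exporters:
  elastic:
    apm_server_url: "https://elasticapm.example.com:8200" <2>

service:
  pipelines:
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [batch]
      exporters: [elastic] <3>
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [elastic] <4>
----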
<1> The `hostmetrics` receiver must be defined to generate metrics about the host system scraped from various sources.
<2> At a minimum, you must define the URL of the APM Server instance you are sending data to. For additional configurations,
like specifying an API key, secret token, or TLS settings, see the Elastic exporter <<open-telemetry-elastic-config,configuration options>>.
<3> To translate metrics, you must define the Elastic exporter in `service.pipelines.metrics.exporters`.
<4> To translate trace data, you must define the Elastic exporter in `service.pipelines.traces.exporters`.

Once a `receiver`, `processor`, and `exporter` are defined, you can configure {ot-pipelines}[`pipelines`] in the `service` section of your configuration.
The `traces` and `metrics` pipelines represent the path of trace data and metrics through your collector and bring all three of these components together.
You can also enable {ot-extension}[`extensions`] for tasks like monitoring the health of the collector.
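
For example, the contrib distribution includes a `health_check` extension that exposes an HTTP endpoint
you can probe to monitor the collector's health. A minimal sketch of enabling it looks like this:

[source,yaml]
----
extensions:
  health_check:    # serves a health endpoint for liveness/readiness checks

service:
  extensions: [health_check]
----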

TIP: To collect infrastructure metrics, we recommend using {metricbeat-ref}/metricbeat-overview.html[{metricbeat}],
a mature collector with more integrations and integrated visualizations.

[float]
[[open-telemetry-elastic-config]]
===== Elastic exporter configuration options

|===

| `apm_server_url` | Elastic APM Server URL. (required).

| `api_key` | Credential for {apm-server-ref-v}/api-key.html[API key authorization]. Must also be enabled in Elastic APM Server. (optional)

| `secret_token` | Credential for {apm-server-ref-v}/secret-token.html[secret token authorization]. Must also be enabled in Elastic APM Server. (optional)

| `ca_file` | Root Certificate Authority (CA) certificate for verifying the server's identity if TLS is enabled. (optional)

| `cert_file` | Client TLS certificate. (optional)

| `key_file` | Client TLS key. (optional)

| `insecure` | Disable verification of the server's identity if TLS is enabled. (optional)

|===
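
As a sketch, an exporter entry that authenticates with a secret token and trusts a custom certificate
authority might look like this (the URL, token, and file path are placeholders):

[source,yaml]
----
exporters:
  elastic:
    apm_server_url: "https://elasticapm.example.com:8200"
    secret_token: "APM_SECRET_TOKEN"
    ca_file: "/etc/pki/tls/certs/custom_ca.crt"
----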

[float]
[[instrument-apps-collector]]
===== Instrument applications

To export traces and metrics to the OpenTelemetry Collector, ensure that you have instrumented your services and applications
with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app using the
https://github.com/open-telemetry/opentelemetry-java-instrumentation[OpenTelemetry agent for Java].

By defining the following environment variables, you can customize the OTLP endpoint the agent will use to communicate with
APM Server.

[source,bash]
----
export OTEL_RESOURCE_ATTRIBUTES=service.name=frontend,service.version=1.1,deployment.environment=staging
export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
-jar target/frontend-1.1.jar
----

|===

| `OTEL_RESOURCE_ATTRIBUTES` | Resource attributes that identify your application, such as the service name, service version, and deployment environment.

| `OTEL_EXPORTER_OTLP_ENDPOINT` | APM Server URL. The host and port that APM Server listens for events on.

|===

You are now ready to collect <<open-telemetry-elastic-traces-metrics,traces and metrics>>, <<open-telemetry-elastic-verify,verify metrics>>,
and <<open-telemetry-elastic-kibana,visualize metrics>> in {kib}.

[[open-telemetry-elastic-protocol]]
==== APM Server native support of OpenTelemetry protocol

experimental::[This feature is experimental and may be changed in a future release. It is only available in a self-managed environment.]

The APM Server native support of the OpenTelemetry protocol (OTLP) allows you to send telemetry data directly from your applications to APM Server.
Trace data collected from your services and metrics collected from your applications and infrastructure are sent using OTLP.

Review comment (Contributor): Is this the same as the "Elastic's protocol" that you refer to previously? If so, I would be consistent.

Reply (Contributor, author): No, this is different. This refers to the OpenTelemetry protocol (OTLP).

Reply (@bmorelli25, Member, Mar 18, 2021): "Elastic's protocol" refers to how Elastic APM Agents send data to a collector, like APM Server. "OTLP" refers to how OpenTelemetry agents send data to a collector, like the OTEL collector. APM Server can now receive native OTLP data.


image::images/open-telemetry-protocol-arch.png[OpenTelemetry Elastic protocol architecture diagram]

[float]
[[instrument-apps-apm-server]]
===== Instrument applications

To export traces and metrics to APM Server, ensure that you have instrumented your services and applications
with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app using the
https://github.com/open-telemetry/opentelemetry-java-instrumentation[OpenTelemetry agent for Java].

By defining the following environment variables, you can customize the OTLP endpoint so that the OpenTelemetry agent communicates with
APM Server.

[source,bash]
----
export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer apm_secret_token"
java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
-classpath lib/*:classes/ \
com.mycompany.checkout.CheckoutServiceServer
----

|===

| `OTEL_RESOURCE_ATTRIBUTES` | Resource attributes that identify your application, such as the service name, service version, and deployment environment.

| `OTEL_EXPORTER_OTLP_ENDPOINT` | APM Server URL. The host and port that APM Server listens for events on.

| `OTEL_EXPORTER_OTLP_HEADERS` | Authorization header that includes the Elastic APM Secret token or API key: `"Authorization=ApiKey api_key"`.

For information on how to format an API key, see our {apm-server-ref-v}/api-key.html[API key] docs.

Note the required space between `Bearer` and `apm_secret_token`, and between `ApiKey` and `api_key`.

| `OTEL_EXPORTER_OTLP_CERTIFICATE` | Certificate for TLS credentials of the gRPC client. (optional)

|===

You are now ready to collect <<open-telemetry-elastic-traces-metrics,traces and metrics>>, <<open-telemetry-elastic-verify,verify metrics>>,
and <<open-telemetry-elastic-kibana,visualize metrics>> in {kib}.

[float]
[[open-telemetry-elastic-traces-metrics]]
==== Collect traces and metrics

To export traces and metrics, ensure that you have instrumented your services and applications
with the OpenTelemetry API, SDK, or both.

Here is an example of how to capture business metrics from a Java application.

[source,java]
----
// initialize metric
Meter meter = GlobalMetricsProvider.getMeter("my-frontend");
DoubleCounter orderValueCounter = meter.doubleCounterBuilder("order_value").build();

public void createOrder(HttpServletRequest request) {

// create order in the database
...
// increment business metrics for monitoring
orderValueCounter.add(orderPrice);
}
----

IMPORTANT: If collecting metrics, please note that the https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/DoubleValueRecorder.html[`DoubleValueRecorder`]
and https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/LongValueObserver.html[`LongValueObserver`] metrics are not yet supported.

[[open-telemetry-elastic-verify]]
==== Verify OpenTelemetry metrics data

Use *Discover* to validate that metrics are successfully reported to {kib}.

. Launch {kib}:
+
--
include::../tab-widgets/open-kibana-widget.asciidoc[]
--

. Open the main menu, then click *Discover*.
. Select `apm-*` as your index pattern.
. Filter the data to only show documents with metrics: `processor.name :"metric"`
. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return
only OpenTelemetry metrics documents.
[[open-telemetry-elastic-kibana]]
==== Visualize in {kib}

TSVB within {kib} is the recommended visualization for OpenTelemetry metrics. TSVB is a time series data visualizer that allows you to use the
{es} aggregation framework's full power. With TSVB, you can combine an infinite number of aggregations to display complex data.

In this example eCommerce OpenTelemetry dashboard, there are four visualizations: sales, order count, product cache, and system load. The dashboard provides us with business
KPI metrics, along with performance-related metrics.

[role="screenshot"]
image::images/ecommerce-dashboard.png[OpenTelemetry visualizations]

Let's look at how this dashboard was created, specifically the Sales USD and System load visualizations.

. Open the main menu, then click *Dashboard*.
. Click *Create dashboard*.
. Click *Save*, enter the name of your dashboard, and then click *Save* again.
. Let’s add a Sales USD visualization. Click *Edit*.
+
Both visualizations are now displayed on your custom dashboard.

IMPORTANT: By default, Discover shows data for the last 15 minutes. If you have a time-based index
and no data displays, you might need to increase the time range.
