Update README
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
bogdandrutu committed Mar 9, 2021
1 parent 29574e9 commit 102bc06
Showing 1 changed file with 42 additions and 13 deletions.
55 changes: 42 additions & 13 deletions exporter/stackdriverexporter/README.md
@@ -12,14 +12,20 @@ The following configuration options are supported:
- `number_of_workers` (optional): NumberOfWorkers sets the number of goroutines that send requests. The minimum number of workers is 1.
- `resource_mappings` (optional): ResourceMapping defines the mapping of resources from the source (OpenCensus) to the target (Stackdriver).
- `label_mappings` (optional): An optional flag that signals whether the transformation can proceed if a label is missing in the resource.
- `retry_on_failure` (optional): Configuration for how to handle retries when sending data to Google Cloud fails.
  - `enabled` (default = true)
  - `initial_interval` (default = 5s): Time to wait after the first failure before retrying; ignored if `enabled` is `false`
  - `max_interval` (default = 30s): The upper bound on backoff; ignored if `enabled` is `false`
  - `max_elapsed_time` (default = 120s): The maximum amount of time spent trying to send a batch; ignored if `enabled` is `false`
- `sending_queue` (optional): Configuration for how to buffer traces before sending.
  - `enabled` (default = true)
  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`
  - `queue_size` (default = 5000): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`;
    users should calculate this as `num_seconds * requests_per_second` (see the worked example after this list) where:
    - `num_seconds` is the number of seconds to buffer in case of a backend outage
    - `requests_per_second` is the average number of requests per second.
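
As a rough, hedged illustration (the numbers below are assumptions, not recommendations): to buffer roughly 60 seconds of a backend outage at an average of 10 requests per second, `queue_size` would be about `60 * 10 = 600`:

```yaml
sending_queue:
  enabled: true
  # Illustrative value only: ~60 s of outage buffer at ~10 requests/second.
  queue_size: 600
```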

Additional configuration for the trace exporter:

- `trace.bundle_delay_threshold` (optional): Starting from the time that the first span is added to a bundle, once this delay has passed, handle the bundle. If not set, uses the exporter default.
- `trace.bundle_count_threshold` (optional): Once a bundle has this many spans, handle the bundle. Since only one span at a time is added to a bundle, no bundle will exceed this threshold, so it also serves as a limit. If not set, uses the exporter default.
- `trace.bundle_byte_threshold` (optional): Once the number of bytes in the current bundle reaches this threshold, handle the bundle. This triggers handling, but does not cap the total size of a bundle. If not set, uses the exporter default.
- `trace.bundle_byte_limit` (optional): The maximum size of a bundle, in bytes. Zero means unlimited.
- `trace.buffer_max_bytes` (optional): The maximum number of bytes that the Bundler will keep in memory before returning ErrOverflow. If not set, uses the exporter default.
Note: The `retry_on_failure` and `sending_queue` settings above are provided (and documented) by the [Exporter Helper](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/exporterhelper#configuration).

Additional configuration for the metric exporter:

@@ -48,12 +54,15 @@ exporters:
          - source_key: source.label1
            target_key: target_label_1

    trace:
      bundle_delay_threshold: 2s
      bundle_count_threshold: 50
      bundle_byte_threshold: 15e3
      bundle_byte_limit: 0
      buffer_max_bytes: 8e6
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 120s
    sending_queue:
      enabled: true
      num_consumers: 2
      queue_size: 50

    metric:
      prefix: prefix
@@ -70,3 +79,23 @@ following proxy environment variables:
If set at Collector start time, exporters, regardless of protocol,
will or will not proxy traffic as defined by these environment variables.

# Recommendations

It is recommended to always run a [batch processor](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor)
and [memory limiter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiter) for tracing pipelines to ensure
optimal network usage and to avoid memory overruns. You may also want to run an additional
[sampler](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/probabilisticsamplerprocessor), depending on your needs. A minimal
pipeline sketch follows below.
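
The following is a minimal, illustrative sketch of where these processors sit relative to this exporter. The receiver and all processor values are placeholder assumptions, not recommended settings:

```yaml
processors:
  memory_limiter:
    # Placeholder limits; tune for your own deployment.
    check_interval: 1s
    limit_mib: 1500
    spike_limit_mib: 512
  batch:
    # Placeholder batching settings.
    timeout: 5s
    send_batch_size: 512

service:
  pipelines:
    traces:
      receivers: [otlp]            # assumes an `otlp` receiver is configured elsewhere
      processors: [memory_limiter, batch]
      exporters: [stackdriver]
```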

# Deprecations

The previous trace configuration (v0.21.0) has been deprecated in favor of the common configuration options available in OpenTelemetry. These options will cause the Collector to fail to start
and should be migrated (see the sketch after this list):

- `trace.bundle_delay_threshold` (optional): Use the `batch` processor instead ([docs](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor)).
- `trace.bundle_count_threshold` (optional): Use the `batch` processor instead ([docs](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor)).
- `trace.bundle_byte_threshold` (optional): Use the `memorylimiter` processor instead ([docs](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiter)).
- `trace.bundle_byte_limit` (optional): Use the `memorylimiter` processor instead ([docs](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiter)).
- `trace.buffer_max_bytes` (optional): Use the `memorylimiter` processor instead ([docs](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiter)).
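
As a hedged migration sketch (the values are placeholders, shown only to indicate where the equivalent knobs now live), the bundle delay and count thresholds map roughly onto `batch` processor settings, while memory bounds are handled by the `memory_limiter` processor:

```yaml
processors:
  batch:
    # Roughly replaces trace.bundle_delay_threshold.
    timeout: 2s
    # Roughly replaces trace.bundle_count_threshold.
    send_batch_size: 50
  memory_limiter:
    # Bounds overall Collector memory use instead of per-bundle byte limits.
    check_interval: 1s
    limit_mib: 1500
```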
