Building with ocb 0.109.0 and an otelcol distribution with all components also pinned to 0.109.0 produces the error below:
ocb build for 0.109.0 fails: mismatch in go.mod and builder configuration versions: component "go.opentelemetry.io/collector/confmap/provider/envprovider" version calculated by dependencies "v1.15.0" does not match configured version "v0.109.0"
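A likely fix, judging from the error message, is that the stable confmap modules follow the v1.x versioning line (v1.15.0 is the release paired with collector v0.109.0), so they must not be pinned to v0.109.0 in the builder configuration. A sketch of the relevant builder.yaml fragment, assuming the providers are listed explicitly (the module list and versions here are illustrative, derived only from the error text):

```yaml
# builder.yaml (fragment) — illustrative, based on the error message:
# stable confmap provider modules use the v1.x line, not v0.109.0
providers:
  - gomod: go.opentelemetry.io/collector/confmap/provider/envprovider v1.15.0
  - gomod: go.opentelemetry.io/collector/confmap/provider/fileprovider v1.15.0
```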
OpenTelemetry Collector configuration

```yaml
extensions:
  # zPages extension enables in-process diagnostics
  zpages:
    endpoint: "${env:HOST_IP}:55679"
  # Health Check extension responds to health check requests
  health_check:
    endpoint: ${env:HOST_IP}:8081 # health check result is pushed to port 8081; this is referenced in the livenessProbe of the collector app
  oauth2client:
    client_id: ${env:TM_CLIENT}
    client_secret: ${env:TM_SECRET}
    token_url: https://XXXXXXX/v1/token
    scopes: ["openid", "audience:server:client_id:${env:TM_AUDIENCE}"]
    # tls settings for the token client
    tls:
      insecure: true
      # ca_file: /data/local/otel/ca.pem
      # cert_file: certfile
      # key_file: keyfile
    # timeout for the token client
    timeout: 30s
  opamp:
    server:
      ws:
        endpoint: ${env:OPAMP_ENDPOINT}
        tls:
          insecure_skip_verify: true
    capabilities:
      reports_effective_config: true

receivers:
  # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
      memory:
      load:
        cpu_average: true
      network:
        # <include|exclude>:
        #   interfaces: [ <interface name>, ... ]
        #   match_type: <strict|regexp>
      processes:
      process:
        # <include|exclude>:
        #   names: [ <process name>, ... ]
        #   match_type: <strict|regexp>
        # https://github.com/open-telemetry/opentelemetry-collector/issues/3004
        # Can mute the process exe error by setting mute_process_exe_error: true.
        # By default (if this config is not provided) mute_process_exe_error is set to false.
        # https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/21665
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
        # scrape_process_delay: <time>
  hostmetrics/disk:
    collection_interval: 3m
    scrapers:
      disk: # This metric is required to get correct infrastructure metrics in Datadog.
        # <include|exclude>:
        #   devices: [ <device name>, ... ]
        #   match_type: <strict|regexp>
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
        # <include_devices|exclude_devices>:
        #   devices: [ <device name>, ... ]
        #   match_type: <strict|regexp>
        # <include_fs_types|exclude_fs_types>:
        #   fs_types: [ <filesystem type>, ... ]
        #   match_type: <strict|regexp>
        # <include_mount_points|exclude_mount_points>:
        #   mount_points: [ <mount point>, ... ]
        #   match_type: <strict|regexp>
  otlp:
    protocols:
      grpc:
        endpoint: "${env:HOST_IP}:4317"
      http:
        endpoint: "${env:HOST_IP}:4318"
  # scrape the collector's own metrics and push them
  prometheus/otelcol:
    config:
      scrape_configs:
        - job_name: 'otelcol'
          scrape_interval: 10s
          static_configs:
            - targets: ['${env:HOST_IP}:8888']

processors:
  # Queries the host machine to retrieve the following resource attributes: host.name, host.id, os.type
  resourcedetection:
    detectors: [env, system]
    timeout: 10s
    override: false
  # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.80.0/processor/cumulativetodeltaprocessor
  cumulativetodelta:
  batch/metrics:
    # Datadog APM intake limit is 3.2 MB. Make sure the batches do not go over that.
    send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend.
    send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch.
    timeout: 10s # (default = 5s): Maximum time to wait until the batch is sent.
  batch/traces:
    send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend.
    send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch.
    timeout: 5s # (default = 5s): Maximum time to wait until the batched traces are sent.
  batch/logging:
    send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend.
    send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch.
    timeout: 30s # (default = 5s): Maximum time to wait until the batched logs are sent.
  memory_limiter:
    check_interval: 10 # (default = 0s): Time between measurements of memory usage. The recommended value is 1 second. If traffic is very spiky, decrease check_interval or increase spike_limit_mib to avoid memory usage going over the hard limit.
    limit_mib: 100 # (default = 0): Maximum amount of memory, in MiB, targeted to be allocated by the process heap. Total process memory usage is typically about 50 MiB higher than this value. This defines the hard limit.
    spike_limit_mib: 2 # (default = 20% of limit_mib): Maximum spike expected between measurements of memory usage. Must be less than limit_mib. The soft limit equals (limit_mib - spike_limit_mib). The recommended value is about 20% of limit_mib.
    # limit_percentage: 75 # (default = 0): Maximum amount of total memory targeted to be allocated by the process heap, as a percentage. Supported on Linux with cgroups; intended for dynamic platforms like docker. For instance, 75% of 1 GiB total memory yields a limit of 750 MiB. The fixed setting (limit_mib) takes precedence over the percentage configuration.
    # spike_limit_percentage: 25 # (default = 0): Maximum spike expected between measurements, as a percentage of total available memory. Must be less than limit_percentage. Intended to be used only with limit_percentage.
  # Adjust the number of workers based on the available resources on your device
  # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/attributesprocessor
  attributes:
    actions:
      # The following inserts a new attribute {"env": nonprod|prod} to spans where
      # the key "env" doesn't exist. The type of `env` is inferred by the configuration:
      # `nonprod` or `prod` is a string and is stored as a string in the attributes.
      # The value of env will be retrieved from the CCC http api ncp app configuration
      # by extracting the "env" string from the url http://*.corpinter.net/otlp-http/{env},
      # which may not have been sent by all clients.
      - key: "env"
        value: "${env:ENVIRONMENT}" # static until CCC integration works
        action: upsert
      # The following uses the value from attribute "region" to insert a new
      # attribute {"region": emea|amap|china} to spans where the key "region"
      # does not exist. If the attribute "region" doesn't exist, no new attribute
      # is inserted. The value of "region" will be retrieved from the ambient service layer.
      - key: "region"
        value: "${env:REGION}" # static until ambient service layer integration works
        action: upsert
      - key: tags
        value: ["env:${env:ENVIRONMENT}", "geo:${env:GEO}"]
        action: upsert
      # include:
      #   match_type: strict
      #   # The log severity number defines how to match against a log record's
      #   # SeverityNumber, if defined. This is an optional field.
      #   log_severity_number:
      #     # Min is the lowest severity that may be matched.
      #     # e.g. if this is plog.SeverityNumberInfo, then INFO, WARN, ERROR, and FATAL logs will match.
      #     min: ERROR
      #     # MatchUndefined controls whether logs with "undefined" severity match.
      #     # If this is true, entries with undefined severity will match.
      #     match_undefined: false
  resource:
    attributes:
      - key: env
        value: ${env:ENVIRONMENT}
        action: upsert
      - key: geo
        value: ${env:GEO} # china
        action: upsert
      - key: tags
        value: ["env:${env:ENVIRONMENT}", "geo:${env:GEO}"]
        action: upsert
  # Filter out logs below INFO (no DEBUG or TRACE level logs),
  # retaining logs with undefined severity
  filter/severity_number:
    logs:
      include:
        severity_number:
          min: "INFO"
          match_undefined: true
  # Use the metricstransform processor to aggregate metrics named ebpf_net.message
  # (previously deployed with ebpf-net) by aggregating away all labels
  # except 'message' and 'module' using summation.
  metricstransform:
    transforms:
      - include: ebpf_net.message
        action: update
        operations:
          - action: aggregate_labels
            aggregation_type: sum
            label_set:
              - message
              - module

# get metrics about the collector traces by connecting a trace exporter to a metric receiver producing trace collector metrics
# connectors:
#   spanmetrics:
#     histogram:
#       explicit:
#         buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
#     dimensions:
#       - name: http.method
#         default: GET
#       - name: http.status_code
#     dimensions_cache_size: 1000
#     aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"

exporters:
  otlphttp:
    # https://docs.mbos.mercedes-benz.com/develop/in-car-apps/development-guides/container-development/environment-variables/
    endpoint: otlp.dvb.corpinter.net:4317 # CCC reverse proxy whitelist subdomain. Replace this with below endpoint.
    read_buffer_size: 50 # (default = 0): Maximum size of the in-memory buffer. If unset or 0, defaults to 1/3 of the process memory limit.
    write_buffer_size: 50
    # (default = true): Retry sending batches that failed due to a transient error. The retry
    # runs on a separate goroutine, not blocking the next consumer, with exponential backoff;
    # the retry queue is bounded and new batches are dropped if the queue is full.
    retry_on_failure:
      enabled: true
      initial_interval: 10s # Initial retry interval. The default value is 5s.
      randomization_factor: 0.7
      multiplier: 1.3
      max_interval: 300s # Maximum amount of time spent trying to send a batch. The default value is 300s.
      max_elapsed_time: 10m
    sending_queue:
      enabled: false
      num_consumers: 2
      queue_size: 10
    compression: zstd # (default = gzip): Compression type to use among gzip, snappy, zstd, and none. https://github.com/open-telemetry/opentelemetry-collector/blob/main/config/configgrpc/README.md#compression-comparison

service:
  extensions: [zpages, health_check, oauth2client, opamp]
  telemetry:
    # collect metrics about the collector itself
    metrics:
      address: '${env:HOST_IP}:8888'
    # debug if the collector is not working
    logs:
      level: ${env:LOG_LEVEL}
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch/traces, attributes, resource]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch/metrics, attributes, resource]
      exporters: [otlphttp]
    metrics/hostmetrics:
      receivers: [hostmetrics, hostmetrics/disk, prometheus/otelcol]
      processors: [resourcedetection, batch/metrics, attributes, resource]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, filter/severity_number, batch/logging, attributes, resource]
      exporters: [otlphttp]
```
Log output
```
Using architecture amd64
Using operating system linux
go version go1.22.4 linux/amd64
Using /usr/local/go/bin//go to compile the distributions.
Using builder config cta/builder-prod.yaml
CGO_ENABLED=0
2024-09-13T00:46:46.683Z INFO internal/command.go:125 OpenTelemetry Collector Builder {"version": "(devel)"}
2024-09-13T00:46:46.686Z INFO internal/command.go:161 Using config file {"path": "cta/builder-prod.yaml"}
2024-09-13T00:46:46.686Z INFO builder/config.go:142 Using go {"go-executable": "/usr/local/go/bin/go"}
2024-09-13T00:46:46.688Z INFO builder/main.go:101 Sources created {"path": "bin/collector"}
Error: mismatch in go.mod and builder configuration versions: component "go.opentelemetry.io/collector/confmap/provider/envprovider" version calculated by dependencies "v1.15.0" does not match configured version "v0.109.0". Use --skip-strict-versioning to temporarily disable this check. This flag will be removed in a future minor version
```
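As the error message itself suggests, the strict version check can be temporarily disabled. A sketch of the invocation, using the config path shown in the log above (the flag is slated for removal in a future minor version, so this is a workaround, not a fix):

```shell
# Temporary workaround taken from the error message:
# disable strict go.mod/builder version checking for this build
ocb --config cta/builder-prod.yaml --skip-strict-versioning
```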
Additional context
No response
Component(s)
No response
What happened?
See builder.yaml
Collector version
0.109.0
Environment information
Environment
Ubuntu 22.04
go1.22.4
amd64