Component(s)

exporter/prometheus

What happened?

Description

Even though I have set metric_expiration to 1m, the Prometheus exporter still serves the old metrics, including metrics for pods that were killed a couple of hours ago.

Collector version

otel/opentelemetry-collector-contrib:0.102.0

Environment information

Environment

K8s

OpenTelemetry Collector configuration

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-deployment
  namespace: my-ns
spec:
  mode: deployment
  podAnnotations:
    sidecar.istio.io/inject: "false"
    prometheus.io/port: "8889"
  replicas: 2
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "1"
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      transform/drop:
        trace_statements:
          - context: span
            statements:
              - delete_key(resource.attributes, "process.command_args")
      memory_limiter:
        check_interval: 1s
        limit_percentage: 80
        spike_limit_percentage: 20
      batch: {}
      filter/drop_actuator:
        error_mode: ignore
        traces:
          span:
            - attributes["net.host.port"] == 9001
    connectors:
      spanmetrics:
        events:
          enabled: true
          dimensions:
            - name: exception.type
            - name: exception.message
    exporters:
      debug:
        verbosity: detailed
      otlp/jaeger:
        endpoint: "jaeger-collector.jaeger.svc.cluster.local:4317"
        tls:
          insecure: true
      prometheus:
        endpoint: "0.0.0.0:8889"
        metric_expiration: 80s
        enable_open_metrics: true
        add_metric_suffixes: true
        send_timestamps: true
        resource_to_telemetry_conversion:
          enabled: true
    extensions:
      health_check: {}
    service:
      telemetry:
        logs:
          level: "info"
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, transform/drop, filter/drop_actuator, batch]
          exporters: [spanmetrics, otlp/jaeger]
        metrics:
          receivers: [spanmetrics]
          processors: [memory_limiter, batch]
          exporters: [prometheus]
```

Log output

No response

Additional context

No response
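Two details are worth noting about the configuration above. First, the description mentions a 1m expiration while the posted config sets `metric_expiration: 80s`; with either value, series that stop receiving updates should drop off the /metrics endpoint within a couple of minutes. Second, and this is an assumption worth verifying against the spanmetrics connector README rather than something confirmed in this report: the spanmetrics connector accumulates series and re-emits them on each flush, which would make the Prometheus exporter treat them as freshly updated and never expire them. The connector has its own metrics_expiration setting; a minimal sketch of setting both knobs, with the 5m value chosen purely for illustration:

```yaml
connectors:
  spanmetrics:
    # Assumption: without this, accumulated series are re-emitted on every
    # flush, so the Prometheus exporter never sees them go stale.
    # The 5m value is illustrative, not a recommendation.
    metrics_expiration: 5m

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    metric_expiration: 1m  # series not updated within 1m stop being served
```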
Pinging code owners: @jmichalek132 @dashpole
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Can you use the debug exporter to confirm that you aren't still receiving the metrics in question?
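A minimal way to do that with the posted config, since a debug exporter is already defined there, is to add it to the metrics pipeline:

```yaml
service:
  pipelines:
    metrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      # The debug exporter logs every metric batch, which shows whether the
      # stale series are still arriving from the spanmetrics connector.
      exporters: [prometheus, debug]
```

If the stale series still show up in the debug output after the source pods are gone, the exporter is behaving as designed, since it only expires series that stop receiving updates.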