[receiver/Prometheus] inconsistent timestamps on metric points error #32186
Comments
Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.
Potentially related: #22096
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners: see Adding Labels via Comments if you do not have permissions to add labels yourself.
I haven't seen it before. Can you share more about your setup?
Is the issue happening for any other metrics?
Hi team, I am also facing the same issue.
Note: I am fetching the cAdvisor metrics from the kubelet API.
What collector versions are you using?
@imrajdas can you share your prometheus receiver config?
This might be an issue with metrics that set an explicit timestamp in the exposition.
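For illustration only (the metric lines, label values, and timestamps below are made up), this is what explicit timestamps look like in the Prometheus text exposition format. Both the kubelet's cAdvisor endpoint and the Prometheus /federate endpoint attach the original sample timestamp, in milliseconds, as a trailing field on each sample line:

# Hypothetical /federate output; the last number on each line is an explicit
# timestamp (ms) carried over from the original scrape.
# TYPE prober_probe_total counter
prober_probe_total{probe_type="Liveness",result="successful",instance="10.0.0.1:10250"} 1027 1712127297123
prober_probe_total{probe_type="Liveness",result="successful",instance="10.0.0.2:10250"} 884 1712127301456

If later relabeling removes the only label that distinguishes two such series (here instance), the receiver ends up with two points for the same label set that carry different timestamps, which matches the "inconsistent timestamps on metric points" error and the labeldrop finding later in this thread.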
Collector version: otel/opentelemetry-collector-contrib:0.96.0 (I have also tried the latest one, 0.102.0, and still see the same issue).
I have built a custom exporter that calls the kubelet API to get the cAdvisor metrics and expose them.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners: see Adding Labels via Comments if you do not have permissions to add labels yourself.
Hi, I am having the same issue. Has anyone solved this to get rid of the error message?
After a discussion in Slack, I was pointed in the right direction. As explained in the message, a "container_id" label was being dropped. In my case, I also found that an "id" label was dropped. Once I removed this labeldrop, the issue disappeared.
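For context, the kind of rule described above typically looks like the sketch below (the regex and placement are illustrative, based on the "id" and "container_id" labels mentioned in this comment, not copied from an actual config in this thread). Dropping the only labels that distinguish two cAdvisor series makes their points collide under a single label set while each point still carries its own explicit timestamp:

# Hypothetical relabeling rule inside a scrape_config that reproduces the error:
# dropping "id" and "container_id" collapses otherwise distinct series.
metric_relabel_configs:
  - action: labeldrop
    regex: (id|container_id)

Removing the rule, or narrowing the regex so that at least one distinguishing label survives, keeps the series separate, which matches the fix reported above.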
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners: see Adding Labels via Comments if you do not have permissions to add labels yourself.
Component(s)
receiver/prometheus
What happened?
Description
The OpenTelemetry Collector pod shows the error below:
2024-04-03T06:55:00.063Z warn internal/transaction.go:149 failed to add datapoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "error": "inconsistent timestamps on metric points for metric prober_probe_total", "metric_name": "prober_probe_total", "labels": "{__name__="prober_probe_total", _source="", env="int", instance="", job="prometheus-self"}"}
OTel collector config:
receivers:
  prometheus:
    config:
      global:
        external_labels:
          _source: "**example**"
      scrape_configs:
        - job_name: prometheus-self
          scrape_interval: 1m
          scrape_timeout: 10s
          metrics_path: /federate
          scheme: http
          honor_labels: false
          enable_http2: true
          kubernetes_sd_configs:
            - role: service
              namespaces:
                own_namespace: false
                names:
                  - **example**
              selectors:
                - role: service
                  label: "***example***"
          params:
            'match[]':
              - '{__name__="prober_probe_total"}'
Is this a known bug? The warning currently appears all over the log, even though the metric is exported fine, but we would like to resolve it.
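One possible mitigation worth testing (an assumption on my part, not a confirmed fix for this issue): the /federate endpoint returns samples with their original timestamps, and the Prometheus scrape configuration supports honor_timestamps, which, when set to false, stamps every sample with the scrape time instead, so all points from one scrape share a timestamp. A minimal sketch against the job above:

scrape_configs:
  - job_name: prometheus-self
    metrics_path: /federate
    # Assumption: ignore the explicit timestamps returned by /federate and use
    # the scrape time instead. Note this changes the timestamps attached to
    # federated samples, so verify it is acceptable for your use case.
    honor_timestamps: false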
Expected Result
No warning messages in the logs.
Actual Result
internal/transaction.go:149 failed to add datapoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "error": "inconsistent timestamps on metric points for metric prober_probe_total", "metric_name": "prober_probe_total", "labels": "{__name__="prober_probe_total", _source="", env="int", instance="", job="prometheus-self"}"}
Collector version
v0.93.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response