[prometheusremotewrite] invalid temporality and type combination when remote write to thanos backend #15281
Comments
Pinging code owners: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself. |
The problem also reproduces with VictoriaMetrics as the backend |
Clarification: the error occurs because https://github.com/open-telemetry/opentelemetry-go sends metrics that have 0 DataPoints |
Is there an existing issue on the OTel-Go repo that can be linked here? If not, can you create one with steps to reproduce? |
Exactly, @Aneurysm9. I created an issue for https://github.com/open-telemetry/opentelemetry-go |
I'm having the same issue with |
@montag are you using the open-telemetry library https://github.com/open-telemetry/opentelemetry-go? Try updating to the latest version |
Thanks, @krupyansky. I'm using the otel collector helm chart via terraform. |
@montag are you sending metrics from your application to the otel collector via https://github.com/open-telemetry/opentelemetry-go? |
@krupyansky I'm using the python open-telemetry instrumentation libs to send to the otel collector (otlp receiver), which then uses the prometheusremotewrite exporter to push to prom. I see the above error in the collector logs every few minutes. |
@montag try filing an issue against the python open-telemetry instrumentation libs, like my issue open-telemetry/opentelemetry-go#3394. Most likely the error occurs in your case because the python open-telemetry instrumentation sends metrics that have 0 DataPoints |
@krupyansky Any idea how I might verify that? |
In the same boat as @montag - no idea how we could be sending metrics with 0 DataPoints. |
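One way to verify whether the SDK is actually emitting metrics with zero data points is to dump everything the collector receives. A minimal sketch, assuming a collector version that ships the debug exporter with detailed verbosity (older releases used the logging exporter instead) and an OTLP receiver on the default gRPC port:

```yaml
# Sketch: print received metrics so empty (0-data-point) metrics can be spotted.
# The debug exporter and its verbosity setting are assumptions about the
# collector version in use; older collectors used the logging exporter.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]
```

With detailed verbosity each metric and its data points are written to the collector log, so a metric arriving with no data points should stand out.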
Hi! Could you please clarify, are there any showstoppers to upgrading |
That dependency is on the |
Thank you for the answer! Sorry, I misunderstood the discussion; now I get it. I have the same issue with the Kong statsd plugin |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or, if you are unsure which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself. |
I have the same issue with opentelemetry-collector-contrib 0.81.0 |
We were experiencing this with the python metrics instrumentation. You can resolve it with a filter processor: |
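The filter configuration referenced in that comment is not shown above. A minimal sketch of one way to do it, assuming the filterprocessor's OTTL metric context and the METRIC_DATA_TYPE_NONE enum from its documentation (the processor name is a placeholder, and the exact rule from the original comment is unknown):

```yaml
# Sketch only: drop metrics that carry an empty data type so a single bad
# metric does not trigger "invalid temporality and type combination" errors
# in the prometheusremotewrite exporter. Add filter/drop_empty to the
# processors list of the metrics pipeline in your collector config.
processors:
  filter/drop_empty:
    error_mode: ignore
    metrics:
      metric:
        - 'type == METRIC_DATA_TYPE_NONE'
```

With error_mode: ignore, a condition that fails to evaluate is skipped rather than failing the whole pipeline.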
Any updates? I have the same issue as @montag. I'm using: |
Getting the same with the most recent version of the collector: |
…failure to translate metrics (#29729) Don't drop a whole batch in case of failure to translate from OTel to Prometheus. Instead, with this PR we try to send to Prometheus all the metrics that were properly translated and create a warning message for translation failures. This PR also adds support for telemetry in this component so that it is possible to inspect how the translation process is going and identify failed translations. I opted not to include the number of time series that failed translation because I don't want to make assumptions about how the `FromMetrics` function works. Instead we just publish whether there was any failure during the translation process and the number of time series returned. **Link to tracking Issue:** #15281 **Testing:** UTs were added to account for the case where you have mixed metrics, with some succeeding translation and some failing. --------- Signed-off-by: Raphael Silva <rapphil@gmail.com> Co-authored-by: Anthony Mirabella <a9@aneurysm9.com> Co-authored-by: bryan-aguilar <46550959+bryan-aguilar@users.noreply.github.com> Co-authored-by: Bryan Aguilar <bryaag@amazon.com>
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
What happened?
Description
I tried to use Prometheus remote write to a Thanos backend and then display the metrics in Grafana. I found many errors in the otel collector log like "Permanent error: invalid temporality and type combination". As a result, Thanos is missing many metrics used in the Grafana dashboard. Any idea or solution for this?
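No collector configuration was attached to the issue (see "No response" below). A minimal sketch of the kind of pipeline described here, where the endpoint URL and the Thanos Receive service name are placeholders rather than values taken from the report:

```yaml
# Sketch of the described setup: OTLP in, Prometheus remote write out to Thanos.
# The endpoint below is a placeholder for a Thanos Receive remote-write URL.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  prometheusremotewrite:
    endpoint: http://thanos-receive.monitoring.svc:19291/api/v1/receive

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```

Thanos Receive's default remote-write path is /api/v1/receive; the host and port need to match the actual deployment.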
Steps to Reproduce
Expected Result
Actual Result
Collector version
0.61.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
No response
Log output
Additional context
No response