Update OTEL dependencies to latest release candidates #620
Conversation
Codecov Report
@@ Coverage Diff @@
## main #620 +/- ##
==========================================
+ Coverage 66.25% 66.27% +0.02%
==========================================
Files 36 36
Lines 4264 4276 +12
==========================================
+ Hits 2825 2834 +9
- Misses 1312 1314 +2
- Partials 127 128 +1
Hi @lrascao, thanks for submitting this. However, we would like to wait until the Go libraries are released as stable rather than merge the rc branches (release candidates aren't technically stable, yet). There's some more discussion in a similar PR #611
If you don't mind waiting and updating this PR once these tags are published, that would be really helpful. Otherwise, we would like to hold off on merging rc dependencies unless necessary
sure, i'll move this to draft until then
v1.15 is tagged as stable now, any update on when this will be merged?
Looks like we need to wait for folks at https://github.com/open-telemetry/opentelemetry-collector to cut a new release that also bumps to
Looks like they did just one hour after your post 😄
Apparently there's a regression in the google managed prometheus metric naming convention.
@lrascao is this still a draft or is it ready for review? Not sure if you were working on the regression you mentioned in your last comment
Digging around, it looks like this is caused by open-telemetry/opentelemetry-collector-contrib#20519. Looking into it
yeah, not really sure on the path forward here: adapt to the change, or request a rollback of the regression
Talked to the team, and it looks like we should accept the change in default naming to match the Prometheus specification. In other words, just update the failing tests to expect the new names. This will be a breaking change, but it can be disabled using the feature gate flag from the upstream change.
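For context on the naming change discussed above: the Prometheus convention appends a unit suffix to metric names and a "_total" suffix to counters, which is what open-telemetry/opentelemetry-collector-contrib#20519 enabled by default. A minimal Go sketch of that convention (the helper and unit table below are hypothetical illustrations, not the upstream translator code):

```go
package main

import "fmt"

// normalizeName sketches the Prometheus naming convention: append the
// metric's unit as a name suffix, and append "_total" for monotonic
// sums (counters). Hypothetical helper, not the real translator.
func normalizeName(name, unit string, monotonicSum bool) string {
	// A few UCUM unit codes and their Prometheus-style suffixes.
	suffixes := map[string]string{"s": "seconds", "By": "bytes", "1": "ratio"}
	if s, ok := suffixes[unit]; ok {
		name = name + "_" + s
	}
	if monotonicSum {
		name = name + "_total"
	}
	return name
}

func main() {
	fmt.Println(normalizeName("http_server_duration", "s", false)) // http_server_duration_seconds
	fmt.Println(normalizeName("request_count", "", true))          // request_count_total
}
```

This is why previously-passing tests started failing: the expected metric names gained suffixes once normalization became the default.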
alright, updating |
go.opentelemetry.io/otel v1.15.1/v0.38.1
go.opentelemetry.io/contrib v1.16.0/v0.41.0
go.opentelemetry.io/collector v1.0.0/v0.76.1

Signed-off-by: Luis Rascao <luis.rascao@gmail.com>
ping @damemi ready for a GCB run |
/gcbrun |
tests need a retry it seems |
just want to add that test back
I just found open-telemetry/opentelemetry-collector-contrib#21743, which also discusses the changes in metric name normalization. There might be some upstream work to do with this first
Could it upgrade to v1.16.0/v0.39.0 instead to avoid another upgrade? https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.16.0
/gcbrun |
We are pretty close with this one, so given the number of changes I think it would be best to update incrementally if possible. So, getting this green and then immediately following with another bump would be better imo. Looking into why some of the unit tests are failing
/gcbrun |
@damemi needs a splash of
/gcbrun |
Thanks for all of your help @lrascao!
@@ -152,13 +152,13 @@ func convertHistogram(h pmetric.Histogram) metricdata.Aggregation {
 	if h.DataPoints().Len() == 0 {
 		return nil
 	}
-	agg := metricdata.Histogram{
+	agg := metricdata.Histogram[float64]{
As a note, I think it is fine that we are only handling float64 histograms here, because:
- this code is just handling testcases.
- Sum is the only value used here that can be int or float in metricdata.Histogram[N] (we don't read Min or Max), and Sum() always returns a float64, so there shouldn't be any runtime issues with this.
- We force float64 when we handle Sum anyway, which is consistent with the behavior prior to this update and therefore not breaking.

However, we should follow this up with support for int64 histograms. Otherwise, users who switch to the GCP exporter might see slightly different values since we force floats in the output calculation. That would include adding a test case for an int64 histogram, which we don't appear to have currently. Will make a separate issue to track this.
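The [float64] in the diff above comes from the RC making metricdata.Histogram generic over the point value type. A stripped-down sketch of why instantiating with float64 is safe here (stand-in types, not the real metricdata or pmetric packages; the source's Sum accessor yields a float64):

```go
package main

import "fmt"

// Histogram mimics the generic shape introduced in the RC:
// metricdata.Histogram[N], where N is int64 or float64.
type Histogram[N int64 | float64] struct {
	Sum N
}

// convert mirrors the test helper in the diff: since the incoming
// sum is already a float64, a Histogram[float64] loses no precision.
func convert(sum float64) Histogram[float64] {
	return Histogram[float64]{Sum: sum}
}

func main() {
	h := convert(3.5)
	fmt.Println(h.Sum) // 3.5
}
```

An int64 instantiation (the follow-up mentioned above) would need its own conversion path and test case rather than reusing this float64-only helper.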
Relevant changes required from the RC changelog:
- Add Exemplar to the metricdata package (open-telemetry/opentelemetry-go#3849)
- Change the Export interface to accept a *ResourceMetrics instead of ResourceMetrics (open-telemetry/opentelemetry-go#3853)

Also of note: go.opentelemetry.io/otel/metric is now stable as of v1.15.0-rc2

fixes #620