No memory leakage in attributes filter #3695
Conversation
The attributes filter collects seen attributes in order to avoid filtering the same attribute set twice. However, `attribute.Set` is not a comparable type, so new allocations of sets with the same attributes are treated as new sets. As a result, metrics with a high cardinality of attributes consume a lot of memory even when a filter is set to reduce that cardinality.
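The leak mechanism described above can be sketched with a stdlib-only miniature. This is not the SDK's real `attribute.Set`; it is a hypothetical stand-in whose Go equality is pointer identity, which is enough to show why keying a "seen" cache on such a value grows without bound:

```go
package main

import "fmt"

// kv is a stand-in for an attribute key-value pair.
type kv struct{ key, value string }

// set mimics a set type that is comparable in the Go sense but whose
// == is pointer identity rather than deep equality: two sets built
// from identical attributes are never equal. (Hypothetical miniature,
// not the real attribute.Set.)
type set struct{ data *[]kv }

func newSet(kvs ...kv) set { return set{data: &kvs} }

// cacheGrowth inserts n logically identical sets into a cache keyed
// by set and reports how many entries the cache ends up holding.
func cacheGrowth(n int) int {
	seen := map[set]struct{}{}
	for i := 0; i < n; i++ {
		// The same attributes every iteration...
		s := newSet(kv{"host", "a"}, kv{"region", "eu"})
		// ...but each allocation is a distinct map key.
		seen[s] = struct{}{}
	}
	return len(seen)
}

func main() {
	// Every allocation becomes a new cache entry, so under
	// high-cardinality churn the "cache" never stops growing.
	fmt.Println(cacheGrowth(3)) // prints 3, not 1
}
```

The cache therefore defeats its own purpose: it retains one entry per allocation instead of one entry per distinct attribute set.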
Hmm, that's new. I guess if you have more than 10 attributes, or the attributes have the new slice values, they will be compared by pointer value rather than by the underlying values. This is a bug for other parts of the SDK as well.
This seems reasonable to me.
If I recall correctly this map was added to act as a cache and reduce computation. If removing it helps reduce ever-expanding memory overhead at the cost of having to calculate the filter each time that seems reasonable to me.
@MadVikingGod thoughts?
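The trade-off described above, recomputing the filter on every measurement instead of memoizing per input set, can be sketched like this. The helper name and types are hypothetical; the SDK's filter operates on `attribute.Set`, not plain slices:

```go
package main

import "fmt"

// kv is a stand-in for an attribute key-value pair.
type kv struct{ key, value string }

// filterAttrs keeps only the allowed keys. Applying it on every call
// trades a little CPU (O(n) per measurement) for bounded memory:
// nothing about previously seen attribute sets has to be retained.
func filterAttrs(allowed map[string]bool, attrs []kv) []kv {
	var out []kv
	for _, a := range attrs {
		if allowed[a.key] {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	allowed := map[string]bool{"host": true}
	attrs := []kv{{"host", "a"}, {"request_id", "123"}}
	// High-cardinality keys like request_id are dropped each time,
	// with no per-set cache left behind.
	fmt.Println(filterAttrs(allowed, attrs)) // [{host a}]
}
```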
Codecov Report
Additional details and impacted files

@@            Coverage Diff            @@
##             main    #3695     +/-  ##
========================================
- Coverage    79.7%    79.6%    -0.1%
========================================
  Files         171      171
  Lines       12673    12657      -16
========================================
- Hits        10103    10085      -18
- Misses       2357     2359       +2
  Partials      213      213
Aside:
I do not believe this is true. The attribute.Set mechanism constructs a comparable array of KeyValue. For slice-valued Values, this process is carried out recursively: slices are represented as arrays so they can be comparable. If there's a bug in this logic, we should fix it. Or was the underlying issue really about accumulation of distinct sets? I take it from the fix in this PR that the underlying issue was true cardinality, not a failure of the attribute.Set logic. I'll be glad to help if there's a problem!
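The slice-to-array trick mentioned above can be shown with a small stdlib sketch. `toComparable` is a hypothetical helper (the SDK's internals differ), but it demonstrates the same idea: Go slices do not support ==, while fixed-size arrays compare element by element, so copying a slice into an array yields a value safe to use as a map key or compare for equality:

```go
package main

import (
	"fmt"
	"reflect"
)

// toComparable copies a slice into an equally sized array (boxed in an
// interface) so the result supports == by element. Non-slice values
// pass through unchanged. Hypothetical helper for illustration only.
func toComparable(v interface{}) interface{} {
	rv := reflect.ValueOf(v)
	if rv.Kind() != reflect.Slice {
		return v
	}
	// Build a [len]T array type at runtime and copy the elements in.
	arr := reflect.New(reflect.ArrayOf(rv.Len(), rv.Type().Elem())).Elem()
	reflect.Copy(arr, rv)
	return arr.Interface()
}

func main() {
	a := toComparable([]int{1, 2, 3})
	b := toComparable([]int{1, 2, 3})
	// Two independently built arrays with the same elements are equal,
	// unlike two slice headers, which cannot be compared at all.
	fmt.Println(a == b) // true
}
```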
Indeed, #3702. We double checked this and the latest version of
While waiting for the new 0.37 release, we update the otel/sdk/metric package to its current master head to bring in a fix for the memory issue (open-telemetry/opentelemetry-go#3695). Later, when 0.37 is out, we can update all related packages to that version.