OOM caused by span metrics connector #21290
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@albertteoh @kovrus please help
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@alburthoffman apologies for the slow response. I suggest trying the following:
@alburthoffman please try these and get back to @albertteoh.
@albertteoh the old span metrics processor can handle the same config without an OOM issue, so it should not be a config issue.
Thanks for confirming, @alburthoffman. Is this something that can be reproduced locally, by any chance?
@albertteoh I have done a quick round of testing on this. When we enable histogram metrics in the spanmetrics connector, memory usage is very high; with histogram metrics disabled, it is very low. Reducing the number of buckets lowered memory usage a little, but not significantly. Can you please help identify the root cause?
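For reference, a minimal sketch of the two variants tested above. This assumes the `histogram.disable` and `histogram.explicit.buckets` options described in the spanmetrics connector README; check your Collector version supports them:

```yaml
connectors:
  spanmetrics:
    histogram:
      # Variant 1: drop histogram metrics entirely (lowest memory).
      disable: true
      # Variant 2 (instead of disable): keep histograms but shrink
      # the bucket count, which reduces per-series state.
      # explicit:
      #   buckets: [10ms, 100ms, 1s, 10s]
```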
@alburthoffman does this issue persist with the latest release? Relatedly, I wonder if further reducing/aggregating on fewer resource attributes would help; see #29711.
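If I'm reading #29711 correctly, it adds a `resource_metrics_key_attributes` option; under that assumption, aggregating on fewer resource attributes might be sketched like this (attribute names below are illustrative):

```yaml
connectors:
  spanmetrics:
    # Only the listed resource attributes distinguish resource metrics;
    # all other resource attributes are aggregated away, which can
    # sharply reduce series cardinality and therefore memory.
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
```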
This issue is resolved. We added a config option to limit the number of exemplars attached to sum metrics in the spanmetrics connector. It's merged and deployed. @alburthoffman we can close this issue.
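A sketch of what that fix looks like in config, assuming the `exemplars.max_per_data_point` option (the exact key name may differ by Collector version; see the spanmetrics connector README):

```yaml
connectors:
  spanmetrics:
    exemplars:
      enabled: true
      # Cap the exemplars kept per data point so unbounded exemplar
      # accumulation can no longer drive memory growth.
      max_per_data_point: 5
```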
Thanks @aishyandapalli
Component(s)
connector/spanmetrics
Describe the issue you're reporting
We tried to switch from the span metrics processor to the span metrics connector and ran into an OOM issue.
Below is the pod heap memory after switching to the span metrics connector. The pod traffic is around 20K spans per second:
Before this, the span metrics processor was quite stable.
The profile shows that pmap takes a lot of memory for the span metrics connector, which is in createAttributes.
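To see why per-attribute-set allocations like these can dominate, here is a back-of-envelope sketch of how histogram cardinality multiplies memory. All numbers are illustrative assumptions, not measurements from this deployment:

```go
package main

import "fmt"

func main() {
	// Hypothetical shape of one deployment's span traffic.
	services := 50
	operationsPerService := 40
	statusCodes := 3

	// Each distinct (service, operation, status) combination becomes
	// its own metric series with its own attribute map.
	series := services * operationsPerService * statusCodes

	// A histogram series additionally stores one counter per bucket
	// (plus sum/count), so buckets multiply the per-series footprint.
	buckets := 17
	bucketCounters := series * buckets

	fmt.Println("series:", series)
	fmt.Println("histogram bucket counters:", bucketCounters)
}
```

This is why disabling histograms or trimming buckets helps somewhat, while cutting the number of distinct attribute combinations (fewer dimensions or resource attributes) attacks the multiplier directly.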