connectors/datadogconnector: Increasing Memory That Eventually Kills Collector Pods #30908
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hey, we see that release 0.94.0 has been available on GitHub for 8 hours, but the image is not yet present on Docker Hub. Are the release schedules for the two artifacts different?
The Docker image should be available once 0.94.0 gets released in https://github.com/open-telemetry/opentelemetry-collector-releases. See open-telemetry/opentelemetry-collector-releases#472.
Still happening here too 😢
@diogotorres97 we aren't able to reproduce a memory leak in
@diogotorres97 towards 21h20 in the screenshot you shared, was there an increase in data sent to the collectors? Did that time correspond to the spike in requests you mentioned?
Yes. Usually, without spikes, the memory increases over one or two days; with spikes (it depends) it can grow very fast...
@diogotorres97 if higher data/cardinality is being sent, higher memory consumption is expected.
Memory increasing under steady traffic/cardinality is unexpected. We've been unable to reproduce a memory leak with 0.94.0 in tests with different cardinality and different traffic. In the scenario where memory increases under steady traffic, can you please provide us with the output traces in JSON format via the file exporter, graphs showing the steady increase in memory, as well as profiles? Ideally, two profiles spaced apart during a period when memory was increasing. With these two profiles, we'll be able to see what is growing in memory.
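A minimal sketch of how that data could be captured, assuming an OTLP receiver and default settings; the file exporter path and pprof endpoint below are illustrative, not the reporter's actual configuration:

```yaml
# Illustrative snippet only: adds the file exporter (JSON traces) and the pprof
# extension (heap profiles) alongside an existing traces pipeline.
extensions:
  pprof:
    endpoint: 0.0.0.0:1777        # exposes the standard /debug/pprof endpoints

exporters:
  file:
    path: /tmp/otel-traces.json   # hypothetical path; traces are written as JSON

service:
  extensions: [pprof]
  pipelines:
    traces:
      receivers: [otlp]           # assumed receiver; keep whatever is already configured
      exporters: [file]           # add alongside the existing exporters
```

Two heap profiles taken some time apart while memory is climbing could then be captured with `go tool pprof http://<collector>:1777/debug/pprof/heap`.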
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hello @mackjmr. I just wanted to let you know that we have been using the new version and we do not see any memory leaks coming from this component. I am not sure whether we can close this issue now or should wait a bit longer. Thanks in advance.
Thanks for getting back to us @NickAnge! I think we can close this for now. If the issue comes back, please comment on the issue and we can reopen :)
Component(s)
connector/datadog
What happened?
Description
In our setup, we've activated both the Datadog connector and exporter to avoid APM stats sampling. We've been experiencing a continuous increase in memory, eventually leading to the pod reaching an Out-of-Memory (OOM) state after a few hours. We followed the suggested configuration from the README.md and have datadog/connector as the receiver for traces.
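For context, a minimal sketch of the pipeline shape this describes, following the pattern in the connector README; the receiver, pipeline names, and keys are assumptions rather than the exact configuration in use:

```yaml
# Illustrative sketch: the Datadog connector computes APM stats from incoming
# traces and feeds both a trace pipeline and a metrics pipeline exporting to Datadog.
receivers:
  otlp:
    protocols:
      grpc:

connectors:
  datadog/connector:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}             # placeholder API key

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog/connector]
    traces/export:
      receivers: [datadog/connector]     # connector acts as the receiver for traces
      exporters: [datadog]
    metrics:
      receivers: [datadog/connector]     # APM stats emitted as metrics
      exporters: [datadog]
```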
Steps to Reproduce
Expected Result
No memory increase that kills the pod.
Actual Result
Memory increases continuously until it eventually kills the pod.
Collector version
opentelemetry-collector-contrib:0.88.0
Environment information
Environment
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response