Stricter memory limiter #8694
I think #8632 would help address this.
I was recently debugging a similar issue and my analysis led me to a slightly different conclusion.
In our case, the relevant logs look like this:
Pod restarting:
You can see that the explicit GC from the memory limiter never ran, and there was a burst of data that caused high memory usage in between. The proposed solution is to get rid of the ballast and start using […]
@dloucasfx For the ballast we are considering removal on #8343. I am interested in having end-users test […]
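For reference, the relevant `memory_limiter` knobs look roughly like this (the values below are illustrative, not taken from the reports above). The processor only samples memory usage every `check_interval`, so a burst that arrives between two checks can push the process past its limit before the forced GC ever runs:

```yaml
processors:
  memory_limiter:
    # How often the processor samples process memory usage; a burst that
    # arrives between two checks is not seen until the next check.
    check_interval: 1s
    # Hard limit: when a check sees usage above this, the processor
    # forces a garbage collection.
    limit_mib: 1500
    # Soft limit = limit_mib - spike_limit_mib; new data is refused once
    # usage crosses the soft limit.
    spike_limit_mib: 512
```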
Is your feature request related to a problem? Please describe.
In my collector I have a linear pipeline:
receiver -> memory_limiter -> batch -> exporter
and the exporter has its queue enabled. The queue size is set to a large number so that data is never dropped. Instead, I use the memory limiter to refuse data when there is congestion on the exporter side (the queue grows in size, and memory usage grows along with it).
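For concreteness, the setup looks roughly like this (the endpoint and sizes are placeholders, not my real values):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500
    spike_limit_mib: 512
  batch:

exporters:
  otlp:
    endpoint: backend.example.com:4317  # placeholder endpoint
    sending_queue:
      enabled: true
      # Deliberately large so that data is queued instead of dropped;
      # back-pressure is expected to come from the memory limiter.
      queue_size: 100000

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
```

The intent is that when the exporter backs up, the queue (and the heap) grows until the memory limiter starts refusing new data at the front of the pipeline.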
The existing memory limiter fails to do this properly: sometimes it does not stop accepting new records fast enough, because it waits for the GC to free some memory, and the collector runs into an OOM.
Describe the solution you'd like
I have 2 proposals:
Describe alternatives you've considered
Proposal 1 – fork the existing memory limiter.
Proposal 2 – I could achieve the same behaviour by implementing an approach around the `batch` processor and returning an error there if the queue is full. But this solution looks too complex.