Memory leak with Doobie + Task #934
Comments
I ran both example projects for half an hour and both of them seem to have a slowly increasing old gen. Maybe the reason it is visible in the graph is that Doobie GCs 5x more often?

[GC graph: Quill]
[GC graph: Doobie]
|
Hmm, I can't really replicate your results. What OS are you running on? I will also try to run the tests for a longer period of time. |
I am running Linux 5.1.15 and using GraalVM
|
Hmm, I am not using GraalVM; instead I was using OpenJDK 11 (also on Linux). When I get home I will run |
This is what I get running Java 11 with these Java options:
|
Okay, I get the same results with the changes. It's because the connect executor has an unbounded queue and the number of produced database operations exceeds the number that can be processed. I seem to be able to process ~3,900 operations a second, but more than that are being produced. In production I run with a bounded queue and handle the rejections. |
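As an illustration of that setup (the object name, pool sizes and queue capacity below are made up for the example, not the commenter's actual production values), a bounded connect executor might look roughly like this:

```scala
import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import scala.concurrent.ExecutionContext

object BoundedConnectEC {
  // Illustrative bounded executor: at most 32 threads and at most 1000 queued tasks.
  // Once the queue and the pool are both full, further submissions fail with
  // RejectedExecutionException instead of accumulating on the heap, so the caller
  // has to decide how to handle the overflow (retry, shed load, back off, ...).
  val connectEC: ExecutionContext = ExecutionContext.fromExecutor(
    new ThreadPoolExecutor(
      8,                                       // corePoolSize
      32,                                      // maximumPoolSize
      60L,                                     // keepAliveTime for idle non-core threads
      TimeUnit.SECONDS,
      new LinkedBlockingQueue[Runnable](1000)  // bounded work queue
    )
  )
}
```

With an unbounded queue (the no-capacity LinkedBlockingQueue constructor), the same producer/consumer imbalance instead shows up as steadily growing heap usage, which matches the old-gen growth described above.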
Okay, so the question is: we are having this leak in production and we are nowhere near that number of transactions per second (we typically get 200 transactions per second max; let's say 500 max to be generous). I am going to tweak the connect executor to see if I can reproduce it, but I think it's something more. Furthermore, for whatever reason, Quill doesn't have this problem at all. It can handle this load and it also doesn't slow down. |
When I run the Quill example I get a |
Hmm, I don't get this; I even posted the graph, which was from a 20-minute run. Let me run it once more. |
Strange. There are over 11,000 monix-io threads listed in VisualVM when it crashes. |
Latest run; I am also running the suite on a friend's laptop and it's working fine. In any case, this is somewhat concerning, because what you are saying appears to be that Doobie can't process transactions as fast as they come in (talking about our production server where we are experiencing the problem, not this reproduction repo, which is deliberately running a lot of transactions to speed up the leak). Using a bounded queue isn't really an option because we would then be dropping transactions. |
Okay, so I ran

    val transactEC: ExecutionContext = ExecutionContext.fromExecutor(
      new ThreadPoolExecutor(
        100,
        Integer.MAX_VALUE,
        60,
        TimeUnit.SECONDS,
        new LinkedBlockingQueue[Runnable](1000)
      )
    )

and I am still getting the problem. Also, here is a screenshot of it. Not sure what is happening with Quill; maybe it's specific to OpenJDK? In any case there are still problems with Doobie even when using a bounded queue (or should I use more appropriate settings?).

EDIT: Forgot the second parameter, let me re-run. |
Yeah, I have no idea. If I run with [...] I get this [...]. I have removed references to [...]. |
Okay, thanks, I will try to replicate this tomorrow. Maybe it's the timeout of 60 seconds that is causing the queue to fill. This still raises the question, though: having an unbounded queue seems like a band-aid to me, because essentially it is saying that Doobie can't process transactions as fast as it's receiving them. That is understandable for the repo, which has the isolated leak, but in production we honestly do not have that much load, and if I understand correctly I definitely do not want to be dropping transactions because a bounded queue is filling up (I am also curious as to why Quill is not exhibiting this behavior, at least when you slow down the transaction frequency in the reproduction repo). |
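One option for bounding the queue without dropping work, offered here purely as an editor's suggestion rather than something proposed in the thread, is the JDK's ThreadPoolExecutor.CallerRunsPolicy: when the queue is full, the submitting thread runs the task itself, which throttles the producer instead of discarding transactions. A minimal sketch (the sizes are again illustrative):

```scala
import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import scala.concurrent.ExecutionContext

object BackpressuredConnectEC {
  // Sketch only: same bounded pool shape as before, but rejected tasks are executed
  // on the caller's thread (CallerRunsPolicy), so nothing is dropped and the producer
  // is naturally slowed down while the pool catches up.
  val connectEC: ExecutionContext = ExecutionContext.fromExecutor(
    new ThreadPoolExecutor(
      8,
      32,
      60L,
      TimeUnit.SECONDS,
      new LinkedBlockingQueue[Runnable](1000),
      new ThreadPoolExecutor.CallerRunsPolicy()
    )
  )
}
```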
Note that this may be related to #913
I finally managed to diagnose a memory leak that is happening with Doobie + monix.eval.Task/cats.effect.IO. Note that this only occurs with Doobie. The repo reproducing the leak can be found here: https://github.com/mdedetrich/task-doobie-memory-leak
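For readers who don't want to dig through the repository, here is a sketch of the general shape of code involved. This is my own simplified illustration, not the actual code from the reproduction repo: it assumes a cats-effect 2-era doobie, an in-memory H2 database on the classpath, and the simplest Transactor constructor, whereas the repo drives a real database through an explicitly configured connect ExecutionContext.

```scala
import cats.effect.{ExitCode, IO, IOApp}
import doobie._
import doobie.implicits._

// Simplified illustration only (not the repro repo's code): a Transactor over an
// in-memory H2 database, with a trivial query transacted in an endless loop so that
// heap and GC behaviour can be watched in VisualVM.
object RepeatedTransactions extends IOApp {

  val xa: Transactor[IO] = Transactor.fromDriverManager[IO](
    "org.h2.Driver",
    "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1",
    "sa",
    ""
  )

  val selectOne: IO[Int] = sql"select 1".query[Int].unique.transact(xa)

  def loop: IO[Unit] = selectOne.flatMap(_ => loop)

  def run(args: List[String]): IO[ExitCode] =
    loop.map(_ => ExitCode.Success)
}
```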