Rest-client-mutiny issue, exhausting thread pool #29301
Comments
/cc @Sgitario, @cescoffier, @geoand, @jponge
That's the expected behavior. We limit the number of outgoing connections to the same endpoint. Using @RequestScoped creates a new pool every time; it's inefficient in terms of resource usage.
BTW, you can increase the number of connections and combine that with a connection TTL.
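For reference, a minimal sketch of what that could look like in application.properties, reusing the `config-key` from this issue (values are hypothetical; exact property names and units should be checked against the Quarkus version in use):

```properties
# Hypothetical values: raise the per-endpoint connection limit and
# let idle connections be dropped after a TTL (in milliseconds).
quarkus.rest-client.config-key.connection-pool-size=50
quarkus.rest-client.config-key.connection-ttl=30000
```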
@cescoffier Thank you very much, Clement, for dedicating time to this. Just to be clear, the connections are not concurrent, so I was expecting the pool to reuse the connections, or the TTL configuration to drop them.
If the requests are sequential, it should not be a problem. Once the connection is closed or is no longer used, it should be recycled. Do you have a reproducer?
@cescoffier I will create one as soon as I have a short-term fix for my project and put it here.
@cescoffier Here's a reproducer. I was expecting the issue to disappear once I kept it simpler, but it's still there. I have reduced the pool size to 5 to make it easy to reproduce.
@cescoffier Reminder: I attached a reproducer, so we can remove the tag ;-) Note: I tried to include a proper failing automated test, but for some reason it was taking more time than the whole reproducer.
@Chexpir Thanks for the reproducer, it's interesting. Basically, the connections are not recycled because you don't read the body of the response. As your REST Client method returns a Uni and the body is never consumed, the connection is never released back to the pool. I would recommend switching to RESTEasy Reactive and the Reactive REST Client. The usage of Mutiny in this context is not doing what you think and, as indicated in the log, is deprecated. I'm a bit stuck in my tests as the service you use is now rejecting my requests... (Too many requests)
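As a rough illustration of that suggestion (hypothetical names and endpoint, not the attached reproducer), a client interface used with the reactive REST client could look like the sketch below. With quarkus-rest-client-reactive, the client consumes the response body as part of completing the Uni, so the connection goes back to the pool:

```java
package org.acme;

// Sketch only: assumes the quarkus-rest-client-reactive extension.
// On Quarkus 2.x the JAX-RS annotations live in the javax namespace;
// newer versions use jakarta.
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import io.smallrye.mutiny.Uni;

@Path("/data")
@RegisterRestClient(configKey = "config-key")
public interface ReactiveExternalClient {

    // The reactive client supports Uni return types natively: the body is
    // read and the connection is released when the Uni completes.
    @GET
    Uni<String> fetch();
}
```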
Interesting! So moving to Uni should probably stop the issue?
If you use the reactive REST client, the issue does not happen. So, it's a bug in the classic REST client. Unfortunately, I won't be able to help on this one.
Thanks! Sorry, my question was wrong. I meant: would moving to Uni<Void> instead of Uni stop the issue? Nevertheless, do you want me to close the issue to clean up the board? Or should I leave it open?
I would close it as won't fix: 1) it's a code issue, and 2) it's a RESTEasy Classic issue that does not happen with RESTEasy Reactive.
Describe the bug
When using rest-client-mutiny with methods returning a Uni, requests start failing once the configured connection pool size is exhausted (the default appears to be 50).
quarkus.rest-client.config-key.connection-pool-size=5
The connect-timeout, read-timeout, and connection-ttl settings don't affect the result.
Sometimes it manages to do "yet another round" of requests, so if the pool size is N, it appears to get 2*N requests working.
My workaround for now is to set scope=RequestScoped; the issue is reproducible with the Singleton, Dependent, and ApplicationScoped scopes. The failing pattern is sketched below.
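A sketch of the pattern described above (hypothetical names, not the attached reproducer): a classic REST client registered with rest-client-mutiny whose methods return Uni.

```java
package org.acme;

// Sketch of the failing pattern: classic REST client + rest-client-mutiny,
// methods returning Uni. Namespaces/annotations assume Quarkus 2.x (javax).
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import io.smallrye.mutiny.Uni;

@Path("/data")
@RegisterRestClient(configKey = "config-key")
public interface ClassicMutinyClient {

    // Each sequential call takes a connection from the pool; because the
    // response body is never consumed, the connection is not recycled and
    // the pool (size 5 above) is exhausted after a handful of requests.
    @GET
    Uni<String> fetch();
}
```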
Expected behavior
I was expecting the pool to handle an unlimited number of sequential requests.
Actual behavior
It does not handle an unlimited number of requests; they hang silently.
When the service is stopped, we get an error on each of the hanging threads (error output omitted here).