Request timers are started too early #155
@jonmeredith looks like you wrote this about four years ago. Any recollection whether it was designed this way on purpose?
reiddraper added a commit that referenced this issue on Feb 18, 2014:
As described in #156, there are several types of timeouts in the client. The timeout that is generally provided as the last argument to client operations is used to create timers which prevent us from waiting forever on messages for TCP data (from gen_tcp). There are several cases where this timeout was hardcoded to infinity. This can cause the client to hang on these requests for a (mostly) unbounded time. Even when using a gen_server timeout, the gen_server itself will continue to wait for the message to come, with no timeout. Further, because of #155, we simply use the `ServerTimeout` as the `RequestTimeout` if there is not a separate `RequestTimeout`. It's possible that the `RequestTimeout` can fire before the `ServerTimeout` (the latter is enforced remotely), but we'd otherwise just be picking some random number to be the difference between them. Addressing #155 will shed more light on this.
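To illustrate the gen_server point with a rough sketch (`my_client` and the request tuple are made-up names, not the client's actual API): the timeout passed to `gen_server:call/3` only bounds how long the *caller* waits for a reply. If it fires, the caller exits, but the socket-owning process keeps waiting for TCP data indefinitely unless it arms its own timer, e.g. via `erlang:send_after`.

```erlang
%% Sketch only: `my_client` is a hypothetical registered gen_server,
%% not the actual riakc_pb_socket API. The 5000 ms bounds the caller;
%% if it fires, the caller gets exit({timeout, ...}), while the server
%% process itself continues waiting for the TCP message.
Result = try
             gen_server:call(my_client, {get, <<"bucket">>, <<"key">>}, 5000)
         catch
             exit:{timeout, _} -> {error, caller_timeout}
         end.
```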
reiddraper added a commit that referenced this issue on Feb 18, 2014:
Backport of 5aa1ab0 (same commit message as above).
@reiddraper Has this been adequately resolved by #156 and #160?
`riakc_pb_socket` uses `{active, once}` to receive TCP data as messages. In order to support timeouts on reading this data, it also sends itself messages using `erlang:send_after`. Since only one request can be outstanding at a time, concurrent requests are queued up and processed FIFO. However, the timer for an individual request is started when the request is queued, not when it is actually sent to Riak. The problem is that we start the timer (`send_after`) inside of `new_request`, when this request might just be queued. This has two consequences, one of which is that a request can fail with `timeout` while it is still sitting in the queue, before it has ever been sent to Riak. This may actually be on purpose, but to me it conflates a TCP read timeout with an 'overall request' timeout, which would include time spent waiting in the queue.
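As a minimal sketch of the pattern described above (the module, record, and function names are hypothetical and do not match the actual `riakc_pb_socket` internals), the timer is armed in `new_request`, so any time the request spends queued behind an in-flight request is counted against its timeout:

```erlang
-module(timer_sketch).
-export([new_request/2, send_request/2]).

-record(request, {ref, msg, tref}).

%% The timer starts counting here, while the request may still be
%% sitting in the queue behind an in-flight request.
new_request(Msg, Timeout) ->
    Ref = make_ref(),
    Tref = erlang:send_after(Timeout, self(), {req_timeout, Ref}),
    #request{ref = Ref, msg = Msg, tref = Tref}.

%% By the time the request is actually written to the socket, part
%% (or all) of Timeout may already have been consumed by queueing.
send_request(Socket, #request{msg = Msg}) ->
    gen_tcp:send(Socket, Msg).
```

Moving the `erlang:send_after` call to the point where the request is actually written to the socket would make the timer measure only the TCP read wait, at the cost of no longer bounding time spent in the queue.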