Implement auto-pipelining for all connections [~100% op/s increase] #343
Conversation
I'd say the bench is too synthetic compared to real-world use. A 4 MiB buffer & 100 ops in the queue could sit there indefinitely if the app is under no load, so you absolutely must add a timed flush. Imagine what would happen if a command is stuck in the queue for minutes because the buffer never reaches the 100th command?

Given all those points auto-pipelining could be cool, but it's way more complex than this, it seems to me.
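For illustration only, the timed flush being suggested could be as simple as arming a short timer whenever something is buffered; the `flush` callback, `flushTimer`, and the 1 ms default below are assumptions, not anything from this PR.

```js
// Illustrative sketch: cap how long commands can sit in the auto-pipeline
// buffer, even when neither the count nor the size threshold is reached.
let flushTimer = null;

function scheduleTimedFlush(flush, maxDelayMs = 1) {
  if (flushTimer !== null) return;  // a timed flush is already pending
  flushTimer = setTimeout(() => {
    flushTimer = null;
    flush();                        // write whatever is buffered, however little
  }, maxDelayMs);
}
```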
@Salakar cool, I missed the sense of this line initially:

awesome 👍

Thanks for the clarification, it all makes much more sense now!
@AVVS no worries, if we need the manual pipelines to be sent in their own …
@Salakar 1-2 ticks shouldn't really matter for a manual pipeline IMO, but a force write is handy for such cases.
Performance halves on remote servers, wut. Investigating...
Great work! Writable stream offers the `cork()`/`uncork()` API for batching writes.
@luin It's a lot more performant without cork and uncork; bench with cork/uncork:

Still investigating the remote issue, it seems to just make it worse... might open an issue on the node repo.
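For reference, the cork/uncork variant being compared would look roughly like the sketch below, using Node's built-in `Writable` batching; `writeCorked` is an illustrative helper, not code from this PR.

```js
// Sketch of batching with the stream's built-in cork()/uncork() instead of a
// string buffer: writes issued in the same tick are held back and sent together.
function writeCorked(stream, data) {
  stream.cork();
  stream.write(data);
  // Each cork() needs a matching uncork(); deferring it to the next tick lets
  // every write from the current tick go out as one batch.
  process.nextTick(() => stream.uncork());
}
```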
@Salakar Can you just disable this by default and add an option that enables auto-pipelining? That would allow those who need this for local connections to get the benefit from your PR.
@pavel I'm not sure if it's worth it just for local connections. It would be good if I could get responses on the nodejs issues I created about this, but I've had nothing from anyone there; it's weird how drastically this drops on a remote connection. I have a feeling it might be using a unix socket locally, dunno, need to investigate, just not got the time at the moment. Soon ™️
Wouldn't you get a lot of cross-slot errors if you do multi-key ops against a redis cluster?
@lo1tuma I did this at the raw connection level, so that's not an issue; each individual server connection has its own pipeline.
Closing my PR until I can re-look at this at a later date.
@luin Woo, looks like this is fixed in node now: nodejs/node#5095. Might be worth re-opening and checking the benchmarks.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs, but feel free to re-open a closed issue if needed. |
Description
This implements auto-pipelining.

Took the route of least change and implemented it directly in the Connector class on a per-Redis-server basis; this is done by intercepting `stream.write()` with a bespoke function to handle auto-pipelining. This applies to all types of connections, including Sentinel and clusters.
How it works
When a stream write is called, the proxy function starts a string buffer and keeps track of the number of writes currently buffered. If the number of buffered writes exceeds `MAX_QUEUED`, or if the size of the buffered string exceeds `MAX_BUFFER_SIZE`, then the pipeline buffer is written to the stream.

The above limits are on a per-event-loop-tick basis, so at the start of every node event loop tick, if there is anything buffered it will immediately get written to the stream.
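A minimal sketch of that interception, written from the description above rather than taken from the PR's source; the wrapper function, the threshold values (taken from the 100 ops / 4 MiB figures mentioned in the discussion), and the optional force flag from the Notes below are illustrative.

```js
// Illustrative sketch only. Wraps a socket-like stream's write() so command
// strings are buffered and flushed when a threshold is hit, or at the latest
// on the next event-loop tick.
const MAX_QUEUED = 100;                   // flush after this many buffered writes
const MAX_BUFFER_SIZE = 4 * 1024 * 1024;  // flush after ~4 MiB of buffered data

function enableAutoPipelining(stream) {
  const realWrite = stream.write.bind(stream);
  let buffer = '';
  let queued = 0;
  let flushScheduled = false;

  function flush() {
    flushScheduled = false;
    if (queued === 0) return;
    const data = buffer;
    buffer = '';
    queued = 0;
    realWrite(data);
  }

  stream.write = function (data, force) {
    buffer += data;
    queued += 1;
    if (force) {
      flush();                   // optional force flag (see Notes below): write immediately
    } else if (queued >= MAX_QUEUED || buffer.length >= MAX_BUFFER_SIZE) {
      flush();                   // threshold hit: write the pipeline now
    } else if (!flushScheduled) {
      flushScheduled = true;
      process.nextTick(flush);   // otherwise flush at the start of the next tick
    }
    return true;
  };
}
```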
Benchmarks
These benchmarks were done against a local redis server. The benefits of this change are even more noticeable against remote redis servers as the network latency is greater - pipelining reduces round trips.
Notes
- `.write()` now accepts a flag that, if set to `true`, will force a write (see the sketch below).
- `.connect()`?
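A hypothetical call site for that flag; the argument position and the helper are assumptions, not code from the PR.

```js
// Hypothetical usage: flush a manually built pipeline immediately instead of
// waiting for the next tick's auto-flush.
function sendManualPipeline(stream, commands) {
  const payload = commands.join('');  // commands are pre-serialized command strings
  return stream.write(payload, true); // `true` forces an immediate write
}
```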
Obligatory gif.