
Implement auto-pipelining for all connections [~100% op/s increase] #343

Closed
wants to merge 1 commit into from

Conversation

Salakar
Collaborator

@Salakar Salakar commented Jul 2, 2016

Description

This implements auto-pipelining.

I took the route of least change and implemented it directly in the Connector class, on a per Redis server basis. This is done by intercepting stream.write() with a bespoke function that handles the auto-pipelining.

This applies to all types of connections, including Sentinel and clusters.

How it works

When stream write is called, the proxy function starts a string buffer and keeps track of the number of writes currently buffered. If the number of buffered writes exceeds MAX_QUEUED, or if the size of the buffered string exceeds MAX_BUFFER_SIZE, the pipeline buffer is written to the stream.

The above limits apply on a per event loop tick basis - at the start of every Node event loop tick, anything still buffered is immediately written to the stream.
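The buffering scheme described above can be sketched roughly as follows (the BufferedWriter class, the limit values, and the sink object are illustrative stand-ins, not the PR's actual code):

```javascript
// Illustrative sketch of per-tick write buffering (not the PR's exact code).
const MAX_QUEUED = 100;                   // flush after this many buffered writes
const MAX_BUFFER_SIZE = 4 * 1024 * 1024;  // or after ~4 MiB of buffered data

class BufferedWriter {
  constructor(stream) {
    this.stream = stream;   // underlying socket-like object with a write() method
    this.buffer = '';
    this.queued = 0;
    this.scheduled = false;
  }

  // Proxy write: buffers data instead of writing it straight to the stream.
  // The second param mirrors the PR's forceWrite flag.
  write(data, forceWrite) {
    this.buffer += data;
    this.queued += 1;
    if (forceWrite || this.queued >= MAX_QUEUED || this.buffer.length >= MAX_BUFFER_SIZE) {
      this.flush();
      return;
    }
    if (!this.scheduled) {
      this.scheduled = true;
      // Flush whatever is left at the start of the next event loop turn,
      // so nothing can sit in the buffer indefinitely.
      setImmediate(() => {
        this.scheduled = false;
        this.flush();
      });
    }
  }

  flush() {
    if (this.queued === 0) return;
    this.stream.write(this.buffer);  // one write for the whole batch
    this.buffer = '';
    this.queued = 0;
  }
}
```

With this shape, several commands issued in the same tick are concatenated and hit the socket as a single write, which is what produces the pipelining effect.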

Benchmarks

These benchmarks were run against a local Redis server. The benefits of this change are even more noticeable against remote Redis servers, where network latency is greater - pipelining reduces round trips.

[screenshot: benchmark results]

Notes

  • The limits are hard-coded at the moment - should we allow user land to change these? They're very hit and miss; it took a lot of experimenting to get the best results.
  • Should we allow turning this off? I can't see a reason why this should ever be off - it's more efficient and a recommended Redis practice. I have, however, added a second param to .write() that, if set to true, will force a write.
  • Cleaned up the whole Connector class and moved the connection options into the constructor - no need to re-create these on every .connect()?
  • Probably worth mentioning in the readme that ioredis has this feature and does it by default?

Obligatory gif. [gif: pew pew]

@Salakar
Collaborator Author

Salakar commented Jul 2, 2016

[screenshot: Travis CI test failures]

These two are failing on Travis - I think it's a random connection issue; the error seen is "Connection is closed".

@AVVS
Contributor

AVVS commented Jul 2, 2016

I'd say the bench is too synthetic compared to real-world use. With a 4 MiB buffer and a 100-op queue, commands could sit there indefinitely if the app is under no load, so you absolutely must add a timed flush. Imagine what would happen if a command is stuck in the queue for minutes because the buffer never reaches the 100th command.

  • In most real-world scenarios latency is lower than 1ms, often around 0.2-0.3ms. Given that, I'd consider this a highly controversial change.
  • There is also a rewritten JS parser, which greatly increases performance in low-latency environments - try running the bench with it.
  • How are manual pipelines handled?

Given all those points, auto-pipelining could be cool, but it seems to me it's way more complex than this.

@Salakar
Collaborator Author

Salakar commented Jul 2, 2016

  • It's per event loop tick, so it won't just hang - the buffer will always get written at the start of the next tick, even if it hasn't reached those limits.
  • 100 commands in a single 0.2-0.3ms round trip is still better than 100 round trips totalling 20-30ms; it adds up, as the benchmarks show.
  • I'm aware of the new parser - I re-wrote it with BridgeAR, and wrote the new key slot calculator =]
  • Manual pipelines go through the same logic and will also get buffered. There's no harm in this - it's per event loop tick as I said, so a manual pipeline and any other commands run in the same tick will get sent together.

@AVVS
Contributor

AVVS commented Jul 2, 2016

@Salakar cool, I missed the sense of this line initially:
https://github.com/luin/ioredis/pull/343/files#diff-bbfcd42271084cf70271e042b7b4a16eR44

Am aware of the new parser also as I re-wrote it with BridgeAR, and wrote the new key slot calculator =]

awesome 👍

Manual pipelines go through the same logic and will also get buffered, there's no harm in this, it's per event loop as I said, so the manual pipeline and any other commands run in the same event loop will get sent together.

Thanks for clarification, all makes much more sense now!

@Salakar
Collaborator Author

Salakar commented Jul 2, 2016

@AVVS no worries. If we need manual pipelines to be sent in their own .write(), we can just use the forceWrite arg I added to the proxy stream.write() function - it shouldn't be needed though.

@AVVS
Contributor

AVVS commented Jul 2, 2016

@Salakar 1-2 ticks shouldn't really matter for a manual pipeline IMO, but force write is handy for such cases

@Salakar
Collaborator Author

Salakar commented Jul 2, 2016

Performance halves on remote servers, wut. Investigating...

@luin
Collaborator

luin commented Jul 3, 2016

Great work! Writable streams offer the cork() method to buffer data in memory. In our case, I think calling cork() on the initial write and uncork() when the queue size reaches the limit, or on the next event loop tick, should do the trick.

@Salakar
Collaborator Author

Salakar commented Jul 4, 2016

@luin It's a lot more performant without cork and uncork. Bench with cork/uncork:

[screenshot: benchmark results with cork/uncork]

Still investigating the remote issue - cork/uncork seems to just make it worse... might open an issue on the Node repo.

@pavel

pavel commented Aug 22, 2016

@Salakar Can you just disable this by default and add an option that enables auto-pipelining? That would let those who need it for local connections get the benefit from your PR.

@Salakar
Collaborator Author

Salakar commented Aug 24, 2016

@pavel I'm not sure it's worth it just for local connections. It would be good to get responses on the Node.js issues I created about this, but I've heard nothing from anyone there - it's weird how drastically performance drops on remote connections.

I have a feeling it may be using a Unix socket locally - dunno, need to investigate, just haven't got the time at the moment. Soon ™️

@lo1tuma

lo1tuma commented Aug 30, 2016

Wouldn’t you get a lot of cross-slot errors if you do multi-key ops against a Redis cluster?

@Salakar
Collaborator Author

Salakar commented Aug 30, 2016

@lo1tuma I did this at the raw connection level, so it's not an issue - each individual server connection has its own pipeline.

@Salakar
Collaborator Author

Salakar commented Oct 25, 2016

Closing my PR until I can re-look at this at a later date.

@Salakar Salakar closed this Oct 25, 2016
@Salakar
Collaborator Author

Salakar commented Jan 16, 2017

@luin Woo, looks like this is fixed in Node now - nodejs/node#5095

Might be worth re-opening and re-checking the benchmarks.

@luin luin reopened this Jan 16, 2017
@stale

stale bot commented Oct 23, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs, but feel free to re-open a closed issue if needed.

@stale stale bot added the wontfix label Oct 23, 2017
@stale stale bot closed this Oct 30, 2017