Add GETPXT, MGETPXT (Get with millisecond expiration) commands #1455

Open
wants to merge 3 commits into base: unstable

Conversation


@arcivanov arcivanov commented Dec 18, 2024

Distributed cache synchronization is an eternal problem. Luckily, cloud-deployed services currently (2024) provide high-quality time synchronization for deployed resources, allowing timing coordination with sub-millisecond precision. This command takes advantage of that fact to implement simple distributed cache synchronization.

Using absolute Unix millisecond timestamps for synchronization allows machine-local caches that feed from a shared Valkey cache in the cluster to expire their local entries within 1 ms of each other (assuming sub-millisecond clock synchronization), providing no-cost distributed cache synchronization.
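
As a rough illustration of that idea (client-side only, not part of this patch), a machine-local cache can expire entries at the absolute Unix-millisecond timestamp stored alongside the shared key. The sketch below uses only the Python standard library; the class and its names are hypothetical:

import time

class LocalCache:
    """Machine-local cache whose entries expire at an absolute Unix-ms timestamp."""

    def __init__(self):
        self._entries = {}  # key -> (value, expire_at_ms); -1 means no expiration

    def put(self, key, value, expire_at_ms):
        # value and expire_at_ms would come from a single GETPXT-style reply
        self._entries[key] = (value, expire_at_ms)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expire_at_ms = entry
        now_ms = time.time() * 1000
        if expire_at_ms != -1 and now_ms >= expire_at_ms:
            # Expires within ~1 ms of every other node's local copy,
            # given sub-millisecond clock synchronization.
            del self._entries[key]
            return None
        return value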

Unfortunately, Valkey has no command that conveniently (and atomically!) retrieves a value and its accompanying expiration timestamp in a single call. A workaround exists: issuing GET followed by PEXPIRETIME, pipelined. This naturally involves additional processing, including checking for the key's existence twice.

This patch introduces a new command GETPXT (GET + PeXpireTime), which behaves as documented below and elsewhere in the commit.


Returns null if the value is not found or has expired.
Returns an array of length 2 as [<string key value>, <integer expiration>]. If no expiration is set on the key, the expiration returned is -1.

Added tests to cover these cases.
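
For illustration, a hypothetical valkey-cli session showing the documented reply shape (key and values are made up):

127.0.0.1:6379> SET mykey "hello" PXAT 1735000000000
OK
127.0.0.1:6379> GETPXT mykey
1) "hello"
2) (integer) 1735000000000
127.0.0.1:6379> GETPXT missingkey
(nil)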

Signed-off-by: Arcadiy Ivanov <arcadiy@ivanov.biz>
@madolson
Member

Is your application bottlenecked on throughput? The use case generally makes sense to me, but I would rather avoid adding a command if we can optimize the existing codepath. I will say that fetching the expire time is getting a lot more efficient with the new cache-efficient dictionary we are releasing with 8.1 (hopefully), so maybe it's not as critical.

Another note, there is some overlap in naming with GETEX, which is GET + Set expiry.

@arcivanov
Author

Two commands, apart from the overhead of processing two commands and the potentially non-atomic nature of the retrieval (I'm not sure about the architecture here: are pipelined commands atomic?), will end up needlessly calling both lookupKeyReadOrReply (https://github.com/valkey-io/valkey/blob/unstable/src/t_string.c#L381C14-L381C34) and lookupKeyReadWithFlags (https://github.com/valkey-io/valkey/blob/unstable/src/expire.c#L726C9-L726C31). There is only one lookup in the proposed patch.

I am not married to the command name and considered adding an option to the GET command instead, although since GET is the most frequently used command, that would introduce parsing overhead (however small). Looking for further guidance.

@madolson
Member

are pipelined commands atomic?

Pipelined commands are not atomic, but you can send them in a MULTI-EXEC, which does make them a transaction.

There is only one lookup in the proposed patch.

Understood. However, on modern hardware the bottleneck is mostly DRAM latency when you miss in the CPU cache, so in practice the double miss won't matter much. Give me one sec, I'll come up with a procedure you can use to quickly test the performance.

@arcivanov
Author

Pipelined commands are not atomic, but you can send them in a MULTI-EXEC, which does make them a transaction.

Yep, I'm already using a pipelined MULTI/GET/PEXPIRETIME/EXEC. The EXEC is marked "slow" in the documentation.
Additionally, now that I think of it, there should be a WATCH on the key in question in case the value or the expiration time is modified while it's being retrieved.

The above scheme, however, is entirely avoided if an atomic command is used 😄

@zuiderkwast
Contributor

zuiderkwast commented Dec 18, 2024

The EXEC is marked "slow" in the documentation.

EXEC itself is not slow. It's basically a no-op.

MULTI-EXEC works in a very simple way. The commands are queued up, and when EXEC is called, all of them are executed together, which makes the transaction atomic.

So, you could say EXEC is slow, because it executes all the commands that have been queued up. If you have actually slow commands in the transaction, they will make EXEC slow.
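
For reference, a minimal valkey-cli transcript of that flow for the GET + PEXPIRETIME case discussed above (replies are illustrative and assume the key exists with a millisecond expiration set):

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> GET mykey
QUEUED
127.0.0.1:6379> PEXPIRETIME mykey
QUEUED
127.0.0.1:6379> EXEC
1) "hello"
2) (integer) 1735000000000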

The "slow" flag in the documentation is mainly confusing and I wish we could just delete this in the documentation. Currently, every command that isn't explicitly marked as "fast" are marked as "slow".

Additionally, now that I think of it, there should be a WATCH on the key in question in case the value or the expiration time is modified while it's being retrieved.

I don't get this. MULTI/GET/PEXPIRETIME/EXEC is atomic, just like your proposed GETPXT. No need for WATCH, or am I missing something?

@arcivanov
Author

I don't get this. MULTI/GET/PEXPIRETIME/EXEC is atomic, just like your proposed GETPXT. No need for WATCH, or am I missing something?

Ah, never mind, I dug into the documentation: WATCH is only needed when used before the transaction is started with MULTI. Withdrawn.

@arcivanov arcivanov changed the title Add GETPXT (Get with millisecond expiration) command Add GETPXT, MGETPXT (Get with millisecond expiration) command Dec 20, 2024
@arcivanov arcivanov changed the title Add GETPXT, MGETPXT (Get with millisecond expiration) command Add GETPXT, MGETPXT (Get with millisecond expiration) commands Dec 20, 2024
Remove the "overload" as non-precision timestamp isn't useful in the intended context

Signed-off-by: Arcadiy Ivanov <arcadiy@ivanov.biz>
@zuiderkwast
Contributor

Since MULTI-EXEC can do this atomically, we will most likely not accept this new command without another good reason, something that MULTI-EXEC doesn't solve. Is the performance of MULTI-EXEC not good enough?

@arcivanov
Author

@zuiderkwast well, there are atomic sets (SET key value PXAT tsms and GETEX), all of which could be accomplished with MULTI-EXEC, and yet those commands are in the system. It seems logical that there should be atomic counterparts on the GET side. Additionally, MGET can be accomplished by pipelining GETs inside MULTI-EXEC, yet that command is still there, along with numerous others that could be accomplished via pipelined transactions or scripting.

Is the performance of MULTI-EXEC not good enough?

I will conduct formal performance tests, but given that MULTI-EXEC results in more parsing, more wire traffic, and two lookups instead of one, I doubt the performance will be identical. Our goal is to fit as much performance as possible into the smallest machine possible to save costs.

@zuiderkwast
Contributor

zuiderkwast commented Dec 20, 2024

@arcivanov Thanks, so I can summarize your points like this:

  1. Performance
  2. Symmetry with other commands (SET PXAT, GETEX)
  3. Convenience

In the past, under BDFL governance, commands got added based on personal taste. In recent years, we have been more restrictive about adding new commands, especially when the same effect can already be achieved. There's no absolute rule about this, though.

I'm looking forward to seeing your performance results. To me, that seems like the most convincing reason to add this. My guess is that a transaction is slower, but not many times slower.

@zuiderkwast
Contributor

zuiderkwast commented Dec 20, 2024

Are you querying multiple keys at the same time?

If yes, then there's an alternative to consider. MGET can be used to get multiple keys. If we can allow PEXPIRETIME to return the timestamp for multiple keys, then you could fetch this info for multiple keys like this:

MULTI
MGET key1 key2 key3
PEXPIRETIME key1 key2 key3
EXEC
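
If PEXPIRETIME were extended to accept multiple keys (today it takes a single key), the EXEC reply would presumably be two parallel arrays, hypothetically something like the following (with -1 for a key that has no expiration set):

1) 1) "val1"
   2) "val2"
   3) "val3"
2) 1) (integer) 1735000000000
   2) (integer) 1735000001000
   3) (integer) -1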

@arcivanov
Author

arcivanov commented Dec 24, 2024

These results are very preliminary and I'll push the code once I clean it up a little bit, but here is the data:

Average latency for MULTI-GET-PEXPIRETIME-EXEC is about 20% higher than for GETPXT.

Again, this is very preliminary and I need to exclude a few potential sources of error in the benchmark (namely GET key:__rand_int__\r\nPEXPIRETIME key:__rand_int__ accessing different records).

$ src/valkey-benchmark -t set_pxat,getpxt,getpxt_simulated -n 3000000 -r 3000000 -c 100
====== SET w/ PXAT ======                                                     
  3000000 requests completed in 29.06 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.111 milliseconds (cumulative count 2)
50.000% <= 0.471 milliseconds (cumulative count 1557625)
75.000% <= 0.511 milliseconds (cumulative count 2260843)
87.500% <= 0.655 milliseconds (cumulative count 2634115)
93.750% <= 0.887 milliseconds (cumulative count 2813469)
96.875% <= 1.127 milliseconds (cumulative count 2907931)
98.438% <= 1.391 milliseconds (cumulative count 2953950)
99.219% <= 1.711 milliseconds (cumulative count 2976925)
99.609% <= 2.055 milliseconds (cumulative count 2988424)
99.805% <= 2.247 milliseconds (cumulative count 2994159)
99.902% <= 2.527 milliseconds (cumulative count 2997104)
99.951% <= 2.711 milliseconds (cumulative count 2998582)
99.976% <= 2.879 milliseconds (cumulative count 2999278)
99.988% <= 3.079 milliseconds (cumulative count 2999641)
99.994% <= 3.343 milliseconds (cumulative count 2999818)
99.997% <= 15.079 milliseconds (cumulative count 2999909)
99.998% <= 16.055 milliseconds (cumulative count 2999955)
99.999% <= 16.655 milliseconds (cumulative count 2999978)
100.000% <= 16.959 milliseconds (cumulative count 2999989)
100.000% <= 17.135 milliseconds (cumulative count 2999995)
100.000% <= 17.215 milliseconds (cumulative count 2999998)
100.000% <= 17.263 milliseconds (cumulative count 2999999)
100.000% <= 17.327 milliseconds (cumulative count 3000000)
100.000% <= 17.327 milliseconds (cumulative count 3000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.001% <= 0.207 milliseconds (cumulative count 18)
0.035% <= 0.303 milliseconds (cumulative count 1062)
0.711% <= 0.407 milliseconds (cumulative count 21343)
73.339% <= 0.503 milliseconds (cumulative count 2200176)
85.490% <= 0.607 milliseconds (cumulative count 2564694)
89.396% <= 0.703 milliseconds (cumulative count 2681874)
92.088% <= 0.807 milliseconds (cumulative count 2762641)
94.091% <= 0.903 milliseconds (cumulative count 2822718)
95.691% <= 1.007 milliseconds (cumulative count 2870722)
96.718% <= 1.103 milliseconds (cumulative count 2901555)
97.539% <= 1.207 milliseconds (cumulative count 2926177)
98.088% <= 1.303 milliseconds (cumulative count 2942649)
98.521% <= 1.407 milliseconds (cumulative count 2955644)
98.806% <= 1.503 milliseconds (cumulative count 2964186)
99.043% <= 1.607 milliseconds (cumulative count 2971305)
99.218% <= 1.703 milliseconds (cumulative count 2976552)
99.364% <= 1.807 milliseconds (cumulative count 2980910)
99.465% <= 1.903 milliseconds (cumulative count 2983938)
99.549% <= 2.007 milliseconds (cumulative count 2986477)
99.694% <= 2.103 milliseconds (cumulative count 2990817)
99.989% <= 3.103 milliseconds (cumulative count 2999658)
99.997% <= 4.103 milliseconds (cumulative count 2999899)
99.997% <= 5.103 milliseconds (cumulative count 2999900)
99.997% <= 15.103 milliseconds (cumulative count 2999909)
99.999% <= 16.103 milliseconds (cumulative count 2999957)
100.000% <= 17.103 milliseconds (cumulative count 2999993)
100.000% <= 18.111 milliseconds (cumulative count 3000000)

Summary:
  throughput summary: 103234.69 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.544     0.104     0.471     0.959     1.591    17.327
====== GETPXT ======                                                     
  3000000 requests completed in 28.71 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.111 milliseconds (cumulative count 2)
50.000% <= 0.479 milliseconds (cumulative count 1815098)
75.000% <= 0.495 milliseconds (cumulative count 2323502)
87.500% <= 0.527 milliseconds (cumulative count 2629031)
93.750% <= 0.639 milliseconds (cumulative count 2812974)
96.875% <= 0.807 milliseconds (cumulative count 2906398)
98.438% <= 0.983 milliseconds (cumulative count 2954330)
99.219% <= 1.271 milliseconds (cumulative count 2976796)
99.609% <= 1.927 milliseconds (cumulative count 2988347)
99.805% <= 2.151 milliseconds (cumulative count 2994232)
99.902% <= 2.479 milliseconds (cumulative count 2997082)
99.951% <= 2.599 milliseconds (cumulative count 2998588)
99.976% <= 2.711 milliseconds (cumulative count 2999283)
99.988% <= 2.871 milliseconds (cumulative count 2999641)
99.994% <= 3.103 milliseconds (cumulative count 2999817)
99.997% <= 3.311 milliseconds (cumulative count 2999909)
99.998% <= 3.703 milliseconds (cumulative count 2999955)
99.999% <= 3.959 milliseconds (cumulative count 2999979)
100.000% <= 4.047 milliseconds (cumulative count 2999989)
100.000% <= 4.255 milliseconds (cumulative count 2999995)
100.000% <= 4.303 milliseconds (cumulative count 2999998)
100.000% <= 4.607 milliseconds (cumulative count 2999999)
100.000% <= 4.631 milliseconds (cumulative count 3000000)
100.000% <= 4.631 milliseconds (cumulative count 3000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.001% <= 0.207 milliseconds (cumulative count 21)
0.031% <= 0.303 milliseconds (cumulative count 941)
0.619% <= 0.407 milliseconds (cumulative count 18574)
81.712% <= 0.503 milliseconds (cumulative count 2451346)
92.731% <= 0.607 milliseconds (cumulative count 2781945)
95.222% <= 0.703 milliseconds (cumulative count 2856660)
96.880% <= 0.807 milliseconds (cumulative count 2906398)
97.931% <= 0.903 milliseconds (cumulative count 2937935)
98.582% <= 1.007 milliseconds (cumulative count 2957449)
98.891% <= 1.103 milliseconds (cumulative count 2966733)
99.128% <= 1.207 milliseconds (cumulative count 2973827)
99.265% <= 1.303 milliseconds (cumulative count 2977947)
99.370% <= 1.407 milliseconds (cumulative count 2981106)
99.438% <= 1.503 milliseconds (cumulative count 2983153)
99.492% <= 1.607 milliseconds (cumulative count 2984764)
99.534% <= 1.703 milliseconds (cumulative count 2986029)
99.572% <= 1.807 milliseconds (cumulative count 2987145)
99.604% <= 1.903 milliseconds (cumulative count 2988106)
99.640% <= 2.007 milliseconds (cumulative count 2989204)
99.768% <= 2.103 milliseconds (cumulative count 2993028)
99.994% <= 3.103 milliseconds (cumulative count 2999817)
100.000% <= 4.103 milliseconds (cumulative count 2999992)
100.000% <= 5.103 milliseconds (cumulative count 3000000)

Summary:
  throughput summary: 104493.21 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.507     0.104     0.479     0.695     1.151     4.631
MULTI
GET key:__rand_int__
PEXPIRETIME key:__rand_int__
EXEC
====== GETPXT Simulated ======                                                     
  3000000 requests completed in 30.63 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.231 milliseconds (cumulative count 2)
50.000% <= 0.535 milliseconds (cumulative count 1501247)
75.000% <= 0.655 milliseconds (cumulative count 2254474)
87.500% <= 0.807 milliseconds (cumulative count 2633915)
93.750% <= 0.943 milliseconds (cumulative count 2814682)
96.875% <= 1.095 milliseconds (cumulative count 2906403)
98.438% <= 1.295 milliseconds (cumulative count 2953727)
99.219% <= 1.607 milliseconds (cumulative count 2976665)
99.609% <= 2.111 milliseconds (cumulative count 2988500)
99.805% <= 2.487 milliseconds (cumulative count 2994164)
99.902% <= 2.831 milliseconds (cumulative count 2997092)
99.951% <= 3.103 milliseconds (cumulative count 2998553)
99.976% <= 3.311 milliseconds (cumulative count 2999278)
99.988% <= 3.495 milliseconds (cumulative count 2999647)
99.994% <= 3.671 milliseconds (cumulative count 2999819)
99.997% <= 16.879 milliseconds (cumulative count 2999909)
99.998% <= 17.583 milliseconds (cumulative count 2999955)
99.999% <= 17.887 milliseconds (cumulative count 2999978)
100.000% <= 17.999 milliseconds (cumulative count 2999989)
100.000% <= 18.079 milliseconds (cumulative count 2999996)
100.000% <= 18.111 milliseconds (cumulative count 2999998)
100.000% <= 18.127 milliseconds (cumulative count 2999999)
100.000% <= 18.143 milliseconds (cumulative count 3000000)
100.000% <= 18.143 milliseconds (cumulative count 3000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.016% <= 0.303 milliseconds (cumulative count 481)
0.573% <= 0.407 milliseconds (cumulative count 17186)
35.339% <= 0.503 milliseconds (cumulative count 1060175)
68.272% <= 0.607 milliseconds (cumulative count 2048150)
80.013% <= 0.703 milliseconds (cumulative count 2400388)
87.797% <= 0.807 milliseconds (cumulative count 2633915)
92.577% <= 0.903 milliseconds (cumulative count 2777308)
95.385% <= 1.007 milliseconds (cumulative count 2861545)
96.987% <= 1.103 milliseconds (cumulative count 2909623)
97.999% <= 1.207 milliseconds (cumulative count 2939961)
98.491% <= 1.303 milliseconds (cumulative count 2954722)
98.836% <= 1.407 milliseconds (cumulative count 2965070)
99.061% <= 1.503 milliseconds (cumulative count 2971835)
99.222% <= 1.607 milliseconds (cumulative count 2976665)
99.328% <= 1.703 milliseconds (cumulative count 2979852)
99.405% <= 1.807 milliseconds (cumulative count 2982162)
99.461% <= 1.903 milliseconds (cumulative count 2983820)
99.509% <= 2.007 milliseconds (cumulative count 2985264)
99.607% <= 2.103 milliseconds (cumulative count 2988213)
99.952% <= 3.103 milliseconds (cumulative count 2998553)
99.997% <= 4.103 milliseconds (cumulative count 2999896)
99.997% <= 5.103 milliseconds (cumulative count 2999900)
99.997% <= 17.103 milliseconds (cumulative count 2999917)
100.000% <= 18.111 milliseconds (cumulative count 2999998)
100.000% <= 19.103 milliseconds (cumulative count 3000000)

Summary:
  throughput summary: 97930.40 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.612     0.224     0.535     0.991     1.479    18.143

Signed-off-by: Arcadiy Ivanov <arcadiy@ivanov.biz>
@arcivanov
Author

I have pushed the benchmark changes. Here are the results, run over a 1 GbE LAN from a Framework 16 client, with valkey-server running on an HP DL360 G9 (2x E5-2695 v4, 32 cores, 256 GB RAM). Built with a regular make invocation, with no -march optimizations.

$ src/valkey-benchmark -t set_pxat,getpxt,getpxt_simulated -n 10000000 -r 3000000 -c 600 -h gh-x86-64-0002.local
====== SET w/ PXAT ======                                                     
  10000000 requests completed in 93.39 seconds
  600 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 1.535 milliseconds (cumulative count 1)
50.000% <= 4.767 milliseconds (cumulative count 5009571)
75.000% <= 5.415 milliseconds (cumulative count 7517549)
87.500% <= 6.071 milliseconds (cumulative count 8760775)
93.750% <= 6.615 milliseconds (cumulative count 9379676)
96.875% <= 7.399 milliseconds (cumulative count 9688551)
98.438% <= 8.287 milliseconds (cumulative count 9844120)
99.219% <= 8.943 milliseconds (cumulative count 9921979)
99.609% <= 9.823 milliseconds (cumulative count 9960975)
99.805% <= 10.903 milliseconds (cumulative count 9980493)
99.902% <= 12.199 milliseconds (cumulative count 9990247)
99.951% <= 13.655 milliseconds (cumulative count 9995136)
99.976% <= 14.839 milliseconds (cumulative count 9997562)
99.988% <= 16.415 milliseconds (cumulative count 9998785)
99.994% <= 18.543 milliseconds (cumulative count 9999390)
99.997% <= 20.959 milliseconds (cumulative count 9999698)
99.998% <= 23.695 milliseconds (cumulative count 9999849)
99.999% <= 25.359 milliseconds (cumulative count 9999924)
100.000% <= 31.023 milliseconds (cumulative count 9999962)
100.000% <= 31.471 milliseconds (cumulative count 9999981)
100.000% <= 33.215 milliseconds (cumulative count 9999991)
100.000% <= 33.407 milliseconds (cumulative count 9999997)
100.000% <= 33.471 milliseconds (cumulative count 9999998)
100.000% <= 212.479 milliseconds (cumulative count 10000000)
100.000% <= 212.479 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.000% <= 1.607 milliseconds (cumulative count 10)
0.000% <= 1.703 milliseconds (cumulative count 30)
0.001% <= 1.807 milliseconds (cumulative count 122)
0.005% <= 1.903 milliseconds (cumulative count 530)
0.023% <= 2.007 milliseconds (cumulative count 2304)
0.073% <= 2.103 milliseconds (cumulative count 7280)
6.311% <= 3.103 milliseconds (cumulative count 631111)
20.516% <= 4.103 milliseconds (cumulative count 2051603)
66.060% <= 5.103 milliseconds (cumulative count 6605969)
88.080% <= 6.103 milliseconds (cumulative count 8808027)
96.145% <= 7.103 milliseconds (cumulative count 9614532)
98.132% <= 8.103 milliseconds (cumulative count 9813161)
99.327% <= 9.103 milliseconds (cumulative count 9932714)
99.678% <= 10.103 milliseconds (cumulative count 9967775)
99.827% <= 11.103 milliseconds (cumulative count 9982710)
99.898% <= 12.103 milliseconds (cumulative count 9989794)
99.936% <= 13.103 milliseconds (cumulative count 9993604)
99.962% <= 14.103 milliseconds (cumulative count 9996241)
99.977% <= 15.103 milliseconds (cumulative count 9997700)
99.986% <= 16.103 milliseconds (cumulative count 9998594)
99.990% <= 17.103 milliseconds (cumulative count 9998969)
99.992% <= 18.111 milliseconds (cumulative count 9999244)
99.995% <= 19.103 milliseconds (cumulative count 9999469)
99.996% <= 20.111 milliseconds (cumulative count 9999582)
99.997% <= 21.103 milliseconds (cumulative count 9999712)
99.998% <= 22.111 milliseconds (cumulative count 9999770)
99.998% <= 23.103 milliseconds (cumulative count 9999820)
99.999% <= 24.111 milliseconds (cumulative count 9999867)
99.999% <= 25.103 milliseconds (cumulative count 9999906)
99.999% <= 26.111 milliseconds (cumulative count 9999947)
100.000% <= 28.111 milliseconds (cumulative count 9999952)
100.000% <= 29.103 milliseconds (cumulative count 9999955)
100.000% <= 31.103 milliseconds (cumulative count 9999964)
100.000% <= 32.111 milliseconds (cumulative count 9999984)
100.000% <= 33.119 milliseconds (cumulative count 9999987)
100.000% <= 34.111 milliseconds (cumulative count 9999998)
100.000% <= 213.119 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 107083.58 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        4.869     1.528     4.767     6.815     8.703   212.479
====== GETPXT ======                                                     
  10000000 requests completed in 87.80 seconds
  600 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.743 milliseconds (cumulative count 2)
50.000% <= 3.551 milliseconds (cumulative count 5020352)
75.000% <= 4.439 milliseconds (cumulative count 7510858)
87.500% <= 4.975 milliseconds (cumulative count 8755434)
93.750% <= 5.583 milliseconds (cumulative count 9376140)
96.875% <= 6.055 milliseconds (cumulative count 9690462)
98.438% <= 6.519 milliseconds (cumulative count 9844744)
99.219% <= 7.247 milliseconds (cumulative count 9922143)
99.609% <= 7.855 milliseconds (cumulative count 9961079)
99.805% <= 8.303 milliseconds (cumulative count 9980558)
99.902% <= 8.863 milliseconds (cumulative count 9990261)
99.951% <= 9.783 milliseconds (cumulative count 9995134)
99.976% <= 11.071 milliseconds (cumulative count 9997563)
99.988% <= 11.671 milliseconds (cumulative count 9998781)
99.994% <= 14.071 milliseconds (cumulative count 9999390)
99.997% <= 16.207 milliseconds (cumulative count 9999695)
99.998% <= 17.743 milliseconds (cumulative count 9999848)
99.999% <= 18.671 milliseconds (cumulative count 9999925)
100.000% <= 19.439 milliseconds (cumulative count 9999963)
100.000% <= 21.711 milliseconds (cumulative count 9999981)
100.000% <= 22.207 milliseconds (cumulative count 9999991)
100.000% <= 22.383 milliseconds (cumulative count 9999996)
100.000% <= 22.447 milliseconds (cumulative count 9999998)
100.000% <= 22.479 milliseconds (cumulative count 10000000)
100.000% <= 22.479 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.000% <= 0.807 milliseconds (cumulative count 9)
0.000% <= 0.903 milliseconds (cumulative count 27)
0.000% <= 1.007 milliseconds (cumulative count 38)
0.001% <= 1.103 milliseconds (cumulative count 51)
0.001% <= 1.207 milliseconds (cumulative count 68)
0.001% <= 1.303 milliseconds (cumulative count 85)
0.001% <= 1.407 milliseconds (cumulative count 102)
0.001% <= 1.503 milliseconds (cumulative count 122)
0.001% <= 1.607 milliseconds (cumulative count 142)
0.002% <= 1.703 milliseconds (cumulative count 204)
0.007% <= 1.807 milliseconds (cumulative count 682)
0.028% <= 1.903 milliseconds (cumulative count 2809)
0.103% <= 2.007 milliseconds (cumulative count 10282)
0.295% <= 2.103 milliseconds (cumulative count 29533)
35.943% <= 3.103 milliseconds (cumulative count 3594272)
64.626% <= 4.103 milliseconds (cumulative count 6462603)
89.190% <= 5.103 milliseconds (cumulative count 8919049)
97.138% <= 6.103 milliseconds (cumulative count 9713786)
99.112% <= 7.103 milliseconds (cumulative count 9911210)
99.730% <= 8.103 milliseconds (cumulative count 9973007)
99.921% <= 9.103 milliseconds (cumulative count 9992114)
99.959% <= 10.103 milliseconds (cumulative count 9995875)
99.977% <= 11.103 milliseconds (cumulative count 9997698)
99.989% <= 12.103 milliseconds (cumulative count 9998895)
99.992% <= 13.103 milliseconds (cumulative count 9999171)
99.994% <= 14.103 milliseconds (cumulative count 9999392)
99.996% <= 15.103 milliseconds (cumulative count 9999550)
99.997% <= 16.103 milliseconds (cumulative count 9999682)
99.998% <= 17.103 milliseconds (cumulative count 9999796)
99.999% <= 18.111 milliseconds (cumulative count 9999882)
99.999% <= 19.103 milliseconds (cumulative count 9999946)
100.000% <= 20.111 milliseconds (cumulative count 9999975)
100.000% <= 22.111 milliseconds (cumulative count 9999989)
100.000% <= 23.103 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 113896.51 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        3.758     0.736     3.551     5.751     6.975    22.479
====== GETPXT Simulated ======                                                     
  10000000 requests completed in 97.23 seconds
  600 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 3600 1 300 100 60 10000
  host configuration "appendonly": no
  multi-thread: no

Latency by percentile distribution:
0.000% <= 1.671 milliseconds (cumulative count 1)
50.000% <= 4.783 milliseconds (cumulative count 5025523)
75.000% <= 5.495 milliseconds (cumulative count 7515973)
87.500% <= 6.255 milliseconds (cumulative count 8751646)
93.750% <= 6.791 milliseconds (cumulative count 9378528)
96.875% <= 7.319 milliseconds (cumulative count 9688869)
98.438% <= 7.943 milliseconds (cumulative count 9844871)
99.219% <= 8.455 milliseconds (cumulative count 9922561)
99.609% <= 8.935 milliseconds (cumulative count 9961099)
99.805% <= 9.463 milliseconds (cumulative count 9980635)
99.902% <= 10.103 milliseconds (cumulative count 9990304)
99.951% <= 11.087 milliseconds (cumulative count 9995145)
99.976% <= 12.871 milliseconds (cumulative count 9997561)
99.988% <= 14.103 milliseconds (cumulative count 9998786)
99.994% <= 19.279 milliseconds (cumulative count 9999392)
99.997% <= 34.879 milliseconds (cumulative count 9999695)
99.998% <= 37.951 milliseconds (cumulative count 9999849)
99.999% <= 39.935 milliseconds (cumulative count 9999926)
100.000% <= 42.015 milliseconds (cumulative count 9999963)
100.000% <= 43.231 milliseconds (cumulative count 9999981)
100.000% <= 43.391 milliseconds (cumulative count 9999991)
100.000% <= 43.519 milliseconds (cumulative count 9999997)
100.000% <= 43.551 milliseconds (cumulative count 9999999)
100.000% <= 43.647 milliseconds (cumulative count 10000000)
100.000% <= 43.647 milliseconds (cumulative count 10000000)

Cumulative distribution of latencies:
0.000% <= 0.103 milliseconds (cumulative count 0)
0.000% <= 1.703 milliseconds (cumulative count 2)
0.000% <= 1.807 milliseconds (cumulative count 5)
0.001% <= 1.903 milliseconds (cumulative count 55)
0.003% <= 2.007 milliseconds (cumulative count 315)
0.014% <= 2.103 milliseconds (cumulative count 1439)
6.588% <= 3.103 milliseconds (cumulative count 658806)
25.425% <= 4.103 milliseconds (cumulative count 2542508)
64.305% <= 5.103 milliseconds (cumulative count 6430514)
85.388% <= 6.103 milliseconds (cumulative count 8538781)
95.970% <= 7.103 milliseconds (cumulative count 9597041)
98.731% <= 8.103 milliseconds (cumulative count 9873097)
99.690% <= 9.103 milliseconds (cumulative count 9968958)
99.903% <= 10.103 milliseconds (cumulative count 9990304)
99.952% <= 11.103 milliseconds (cumulative count 9995201)
99.971% <= 12.103 milliseconds (cumulative count 9997052)
99.978% <= 13.103 milliseconds (cumulative count 9997752)
99.988% <= 14.103 milliseconds (cumulative count 9998786)
99.992% <= 15.103 milliseconds (cumulative count 9999204)
99.993% <= 16.103 milliseconds (cumulative count 9999346)
99.994% <= 17.103 milliseconds (cumulative count 9999357)
99.994% <= 18.111 milliseconds (cumulative count 9999363)
99.994% <= 19.103 milliseconds (cumulative count 9999388)
99.994% <= 20.111 milliseconds (cumulative count 9999400)
99.994% <= 30.111 milliseconds (cumulative count 9999412)
99.995% <= 31.103 milliseconds (cumulative count 9999504)
99.996% <= 32.111 milliseconds (cumulative count 9999569)
99.996% <= 33.119 milliseconds (cumulative count 9999619)
99.997% <= 34.111 milliseconds (cumulative count 9999657)
99.997% <= 35.103 milliseconds (cumulative count 9999701)
99.998% <= 36.127 milliseconds (cumulative count 9999769)
99.998% <= 37.119 milliseconds (cumulative count 9999813)
99.999% <= 38.111 milliseconds (cumulative count 9999854)
99.999% <= 39.103 milliseconds (cumulative count 9999892)
99.999% <= 40.127 milliseconds (cumulative count 9999933)
99.999% <= 41.119 milliseconds (cumulative count 9999947)
100.000% <= 42.111 milliseconds (cumulative count 9999964)
100.000% <= 43.103 milliseconds (cumulative count 9999978)
100.000% <= 44.127 milliseconds (cumulative count 10000000)

Summary:
  throughput summary: 102846.80 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        4.853     1.664     4.783     6.951     8.287    43.647
