BenchmarkParallelCacheGet, BenchmarkCacheGetWithBuf, and BenchmarkCacheGetFn create 256MiB caches, then add b.N elements. The problem with this setup is that b.N is very likely to overflow 256MiB at some iteration count, which means that elements will be evicted and the Get operations will short-circuit with ErrNotFound (a miss), rather than incurring the full cost of a hit, which is what we're trying to measure.
Proof:
BenchmarkCacheGet
cache_test.go:681: b.N: 1; hit rate: 1.000000
cache_test.go:681: b.N: 100; hit rate: 1.000000
cache_test.go:681: b.N: 10000; hit rate: 1.000000
cache_test.go:681: b.N: 1000000; hit rate: 1.000000
cache_test.go:681: b.N: 2904801; hit rate: 0.962555 <====
BenchmarkParallelCacheGet
cache_test.go:725: b.N: 1; hit rate: 1.000000
cache_test.go:725: b.N: 100; hit rate: 1.000000
cache_test.go:725: b.N: 10000; hit rate: 1.000000
cache_test.go:725: b.N: 1000000; hit rate: 1.000000
cache_test.go:725: b.N: 21083162; hit rate: 0.000000 <====
cache_test.go:725: b.N: 62143274; hit rate: 0.000000
The hard drop from 1 to 0 in BenchmarkParallelCacheGet occurs because:
- the benchmark populates the cache sequentially;
- with so many keys, only the most recent 256MiB worth of keys is retained;
- the benchmark then retrieves sequentially starting from key 0, querying keys that have already been evicted;
- because Go partitions the iteration count across goroutines, and every goroutine starts its counter from 0, none of them ever reaches the key range whose cache entries were retained.
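The partitioning effect in the last point can be simulated as follows (a sketch, not the actual benchmark code; bN, P, and retained are illustrative stand-ins for b.N, the goroutine count, and the number of entries that survive eviction):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// simulateParallel models b.N lookups split across P goroutines, each
// with a private key counter starting at 0, against a cache that only
// retained the most recent `retained` keys (indices [bN-retained, bN)).
// It returns the resulting hit rate.
func simulateParallel(bN, P, retained int) float64 {
	retainedStart := bN - retained // oldest key index that survived eviction
	per := bN / P                  // iterations each goroutine performs

	var hits atomic.Int64
	var wg sync.WaitGroup
	for w := 0; w < P; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < per; i++ { // every goroutine restarts at key 0
				if i >= retainedStart { // only keys past this point still exist
					hits.Add(1)
				}
			}
		}()
	}
	wg.Wait()
	return float64(hits.Load()) / float64(per*P)
}

func main() {
	// Numbers from the log above; retained is an assumed count of
	// entries that fit in 256MiB.
	fmt.Printf("hit rate: %f\n", simulateParallel(21_083_162, 8, 262_144))
}
```

Because each worker only counts up to bN/P, which is far below the first retained index, the simulated hit rate is exactly 0, matching the logged drop.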
In a nutshell, the benchmark numbers are not reliable. A fix would be to calculate the maximum number of entries we can add without overflowing the cache, add exactly that many, and then iterate over them modulo that count when fetching.
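That fix could be sketched as follows (maxEntries, entryBytes, and the key/cache helpers named in the comments are hypothetical; the real tests would derive the per-entry footprint from their actual key and value sizes):

```go
package main

import "fmt"

// maxEntries computes how many fixed-size entries fit in the cache,
// so the populate phase never overflows it and nothing gets evicted.
func maxEntries(cacheBytes, entryBytes int) int {
	return cacheBytes / entryBytes
}

func main() {
	const (
		cacheBytes = 256 * 1024 * 1024 // the benchmarks' 256MiB budget
		entryBytes = 1024              // assumed per-entry footprint (key + value + overhead)
	)
	n := maxEntries(cacheBytes, entryBytes)

	// Populate phase: add exactly n entries so nothing is evicted
	// (pseudo: for i := 0; i < n; i++ { cache.Set(key(i), value) }).

	// Fetch phase: wrap the b.N iterations over the retained key range
	// with modulo, so every Get targets a key that was never evicted
	// (pseudo: inside the benchmark loop, cache.Get(key(i % n))).
	fmt.Println("entries that fit:", n)
}
```

With this shape, the hit rate stays at 1.0 regardless of how large b.N grows, so the benchmark measures the hit path it claims to.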