
adds LRU cache with lazy eviction #159

Open
wants to merge 1 commit into
base: master
Conversation

behzadnouri

The commit implements a variant of an LRU cache with lazy eviction
(added in the submodule lru::lazy::LruCache):

  • Each entry maintains an associated ordinal value representing when the entry was last accessed.
  • The cache is allowed to grow up to 2 times the specified capacity with no evictions, at which point the excess entries are evicted based on the LRU policy, resulting in amortized O(1) performance.

In many use cases that can allow the cache to store 2 times the capacity and can tolerate the amortized nature of the performance, this results in better average performance, as shown by the added benchmarks.
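The scheme above can be sketched as follows. This is a minimal illustration of lazy eviction, not the PR's actual `lru::lazy::LruCache` API; the names `LazyLru` and `evict` are assumptions, and the sketch assumes a capacity of at least 1:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Minimal sketch of lazy eviction (illustrative names, not the PR's API).
struct LazyLru<K, V> {
    capacity: usize,           // nominal capacity; assumed >= 1
    counter: u64,              // monotonically increasing access ordinal
    map: HashMap<K, (u64, V)>, // key -> (last-access ordinal, value)
}

impl<K: Eq + Hash, V> LazyLru<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, counter: 0, map: HashMap::new() }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.counter += 1;
        let ordinal = self.counter;
        // Touching an entry only bumps its ordinal; no list re-linking.
        self.map.get_mut(key).map(|(ord, v)| {
            *ord = ordinal;
            &*v
        })
    }

    fn put(&mut self, key: K, value: V) {
        self.counter += 1;
        self.map.insert(key, (self.counter, value));
        // Lazy eviction: only act once the cache exceeds 2x capacity.
        if self.map.len() > 2 * self.capacity {
            self.evict();
        }
    }

    // Drop everything except the `capacity` most recently used entries.
    // This O(n) pass runs once per ~capacity inserts: amortized O(1).
    fn evict(&mut self) {
        let mut ordinals: Vec<u64> = self.map.values().map(|&(ord, _)| ord).collect();
        ordinals.sort_unstable();
        // Ordinals are unique, so exactly `capacity` entries survive.
        let cutoff = ordinals[ordinals.len() - self.capacity];
        self.map.retain(|_, (ord, _)| *ord >= cutoff);
    }
}
```

The eviction pass scans all entries, but since it fires only after roughly `capacity` further inserts, its cost amortizes to O(1) per operation.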

Additionally, with the existing implementation, .get requires a mutable reference &mut self. In a multi-threaded setting, this requires an exclusive write lock on the cache even on the read path, which can exacerbate lock contention.
With lazy eviction, the ordinal values can be updated using atomic operations, allowing a shared lock for lookups.
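A rough sketch of why this allows get to take &self: updating the last-access ordinal becomes a store into an AtomicU64, so lookups need only a shared reference (e.g. an RwLock read guard), while inserts still take &mut self. The type and method names here are illustrative assumptions, not the PR's code:

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative sketch (not the PR's API): per-entry atomic ordinals.
struct AtomicLru<K, V> {
    counter: AtomicU64,
    map: HashMap<K, (AtomicU64, V)>,
}

impl<K: Eq + Hash, V> AtomicLru<K, V> {
    fn new() -> Self {
        Self { counter: AtomicU64::new(0), map: HashMap::new() }
    }

    // Lookup takes `&self`: the only mutation is an atomic store of the
    // new ordinal, so many readers can proceed under a shared lock.
    fn get(&self, key: &K) -> Option<&V> {
        let ordinal = self.counter.fetch_add(1, Ordering::Relaxed);
        self.map.get(key).map(|(ord, v)| {
            ord.store(ordinal, Ordering::Relaxed);
            v
        })
    }

    // Inserts still mutate the map and thus require `&mut self`
    // (an exclusive write lock in a multi-threaded setting).
    fn put(&mut self, key: K, value: V) {
        let ordinal = self.counter.fetch_add(1, Ordering::Relaxed);
        self.map.insert(key, (AtomicU64::new(ordinal), value));
    }
}
```

With the eager-eviction design this is not possible, because every get must re-link the entry in the recency list, which mutates shared structure.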

Results from the added benchmarks on my machine (reproducible with cargo bench):

test bench_get      ... bench:      32,066 ns/iter (+/- 7,529)
test bench_get_lazy ... bench:      24,827 ns/iter (+/- 1,255)
test bench_put      ... bench:      74,826 ns/iter (+/- 7,357)
test bench_put_lazy ... bench:      46,131 ns/iter (+/- 3,932)

That is a 22% improvement on get and a 38% improvement on put.

I have implemented the primary cache API, but not all of it. I wanted to get feedback on how likely you would be to merge this before writing more code.
Please let me know if you are open to merging this, and I will complete the API.

@behzadnouri behzadnouri force-pushed the lazy-eviction branch 2 times, most recently from cb82a39 to 78846fd Compare November 25, 2022 19:42
@jeromefroe
Owner

Hey @behzadnouri 👋 This is awesome! I would definitely be open to merging once the API is finished – then users of the library can choose which implementation suits them best.
