
Loading cache implementation? #90

Open
zzbennett opened this issue Oct 6, 2020 · 2 comments

It would be useful to support a loading cache like Guava's LoadingCache

I'm currently using freecache in production (thank you for this useful library btw!). I'm running into an issue that arises in the following scenario:

  1. Many concurrent reads for the same key
  2. Short TTL (1-3 minutes)
  3. Puts are very expensive

Currently, we have logic like this:

value, err := cache.Get([]byte(key))
if err != nil { // cache miss: load the value and store it with a TTL (in seconds)
        value = client.ExpensiveCall(key)
        if cerr := cache.Set([]byte(key), value, ttlSeconds); cerr != nil {
                // handle the failed Set (e.g. entry too large for the cache)
        }
}
return value

When the key expires, we call client.ExpensiveCall(key) to fetch the value, which can take several seconds to finish. The issue is that with many concurrent Gets, we see a spike in calls to client.ExpensiveCall(key), because each of them misses while the value is still being loaded into the cache.

With a loading cache, when a key does not exist in the map, the first Get call blocks while the value is loaded, and subsequent calls for the same key also block until the value is available. For us this would guarantee that only one call to client.ExpensiveCall(key) is made when the value expires.
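
To make those semantics concrete, here is a minimal sketch in Go (LoadingCache, entry, and NewLoadingCache are hypothetical names, not part of freecache or Guava): the first Get for a missing key installs an in-flight placeholder and runs the loader; later Gets for the same key wait on a channel that is closed once the load completes. TTL, eviction, and error handling are omitted for brevity.

package main

import (
        "fmt"
        "sync"
)

// LoadingCache is a sketch of the blocking-Get semantics described
// above (hypothetical type): on a miss, the first Get for a key runs
// the loader exactly once; concurrent Gets for the same key block
// until that single load finishes.
type LoadingCache struct {
        mu      sync.Mutex
        entries map[string]*entry
        load    func(key string) []byte
}

type entry struct {
        ready chan struct{} // closed once the value has been loaded
        value []byte
}

func NewLoadingCache(load func(string) []byte) *LoadingCache {
        return &LoadingCache{entries: map[string]*entry{}, load: load}
}

func (c *LoadingCache) Get(key string) []byte {
        c.mu.Lock()
        e, ok := c.entries[key]
        if !ok {
                // First caller: install an in-flight placeholder, then
                // load outside the lock so other keys are not blocked.
                e = &entry{ready: make(chan struct{})}
                c.entries[key] = e
                c.mu.Unlock()
                e.value = c.load(key) // the single expensive call for this key
                close(e.ready)
        } else {
                c.mu.Unlock()
                <-e.ready // later callers block until the load completes
        }
        return e.value
}

func main() {
        c := NewLoadingCache(func(key string) []byte { return []byte("loaded:" + key) })
        fmt.Println(string(c.Get("some-key"))) // prints "loaded:some-key"
}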

We will probably fork the repo and implement this functionality for our use case, but I'm curious to see if this has come up before or if there is some reason the functionality doesn't exist in this cache implementation (besides simply no one having gotten around to implementing it yet 🙂). Happy to submit a PR.

zzbennett (author) commented Oct 7, 2020

I suppose the main limitation is that there are only 256 locks/segments, so blocking while loading one key would unnecessarily block reads and writes for other keys in the same segment, unless we have a lock per key, which defeats the purpose of this library. One possible solution is to let clients configure the number of segments to lower the chance of lock contention, with the trade-off of higher memory usage (although the size parameter still controls total memory usage, so maybe this is not an issue). Another is to use separate locks for reads and writes, so that only write operations would lead to potential lock contention.

coocood (owner) commented Oct 7, 2020

Maybe singleflight (https://pkg.go.dev/golang.org/x/sync/singleflight) can solve your problem.
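
For what it's worth, a sketch of how that suggestion could look combined with freecache (getOrLoad and the load parameter are made-up names for illustration; freecache.Cache.Get/Set and singleflight.Group.Do are the actual APIs): on a miss, Do guarantees that concurrent callers for the same key share one load() call instead of each triggering the expensive call.

package main

import (
        "fmt"

        "github.com/coocood/freecache"
        "golang.org/x/sync/singleflight"
)

// getOrLoad returns the cached value for key, using singleflight to
// ensure that on a miss only one goroutine per key runs the loader
// while the other callers wait for its result.
func getOrLoad(cache *freecache.Cache, g *singleflight.Group, key string,
        ttlSeconds int, load func(string) []byte) ([]byte, error) {
        if value, err := cache.Get([]byte(key)); err == nil {
                return value, nil // cache hit
        }
        v, err, _ := g.Do(key, func() (interface{}, error) {
                value := load(key) // stands in for client.ExpensiveCall(key)
                if cerr := cache.Set([]byte(key), value, ttlSeconds); cerr != nil {
                        return nil, cerr
                }
                return value, nil
        })
        if err != nil {
                return nil, err
        }
        return v.([]byte), nil
}

func main() {
        cache := freecache.NewCache(10 * 1024 * 1024) // 10 MB cache
        var g singleflight.Group
        load := func(key string) []byte { return []byte("expensive:" + key) }
        value, _ := getOrLoad(cache, &g, "some-key", 120, load)
        fmt.Println(string(value)) // prints "expensive:some-key"
}

Note that singleflight only dedupes calls within a single process, and once the value is cached, later callers take the fast path through Get without touching the Group at all.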
