It would be useful to support a loading cache like Guava's LoadingCache
I'm currently using freecache in production (thank you for this useful library btw!). I'm running into an issue that arises in the following scenario:
Many concurrent reads for the same key
Short TTL (1-3 minutes)
Puts are very expensive
Currently, we have logic like this:
value, err := cache.Get([]byte(key))
if err != nil { // cache miss
    value = client.ExpensiveCall(key)
    if cerr := cache.Set([]byte(key), value, ttlSeconds); cerr != nil {
        // log/handle the Set failure; the freshly loaded value is still usable
    }
}
return value
When the key expires, we call client.ExpensiveCall(key) to get the value, which can take several seconds to finish. The issue is that when we have many concurrent Gets, we see a spike in calls to client.ExpensiveCall(key), because the value takes several seconds to load into the cache.
With a loading cache, when a key does not exist in the map, the first Get call blocks while the value is loaded, and subsequent Gets for the same key also block until the load completes. For us this would guarantee that only one call to client.ExpensiveCall(key) is made when the value expires.
We will probably fork the repo and implement this functionality for our use case, but I'm curious to see if this has come up before or if there is some reason the functionality doesn't exist in this cache implementation (besides simply no one having gotten around to implementing it yet 🙂). Happy to submit a PR.
I suppose the main limitation is that there are only 256 locks/segments, so blocking while one key loads would unnecessarily block reads and writes for every other key in the same segment, unless we had a lock per key, which defeats the purpose of this library. One possible solution would be to let clients configure the number of segments to lower the chance of lock contention, with the trade-off of higher memory usage (although the size parameter still controls total memory usage, so maybe this is not an issue). Another would be separate locks for reads and writes, so that only write operations could lead to lock contention.
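The configurable-segment idea above could be sketched like this (hypothetical: plain maps with an RWMutex per segment stand in for freecache's real segments, which fix the count at 256 and whose Gets also update access metadata, so they cannot take a pure read lock):

```go
package main

import (
	"hash/fnv"
	"sync"
)

// shardedCache sketches a configurable number of segments: keys hash
// into one of n segments, each with its own RWMutex, so a slow write on
// one key only stalls keys that hash to the same segment, while reads
// of a segment proceed concurrently with each other.
type segment struct {
	mu   sync.RWMutex
	data map[string][]byte
}

type shardedCache struct {
	segments []*segment
}

func newShardedCache(n int) *shardedCache {
	segs := make([]*segment, n)
	for i := range segs {
		segs[i] = &segment{data: make(map[string][]byte)}
	}
	return &shardedCache{segments: segs}
}

func (c *shardedCache) segmentFor(key string) *segment {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.segments[h.Sum32()%uint32(len(c.segments))]
}

func (c *shardedCache) Get(key string) ([]byte, bool) {
	s := c.segmentFor(key)
	s.mu.RLock() // shared lock: concurrent Gets on this segment don't contend
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func (c *shardedCache) Set(key string, value []byte) {
	s := c.segmentFor(key)
	s.mu.Lock() // exclusive lock: only writers block this segment
	defer s.mu.Unlock()
	s.data[key] = value
}
```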