perf(tree): adjust cross-block cache config #14125
Conversation
// * First level: 150k accounts * 48B = 0.007 GB
// * Second level: 150k accounts * (192B + 1000 slots * 48B) = 6.732 GB
//
// Total maximum: 15.7 GB
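For reference, a quick sketch that reproduces the two levels shown above, assuming GB here means GiB (1024³ bytes); the 15.7 GB total presumably also includes caches not visible in this excerpt:

```rust
/// Sanity check of the worst-case figures from the comment above
/// (illustrative only; the per-entry sizes come straight from that comment).
fn main() {
    const GIB: f64 = (1024_u64 * 1024 * 1024) as f64;

    let accounts = 150_000_u64;
    let first_level = accounts * 48; // 48 B per first-level entry
    let second_level = accounts * (192 + 1_000 * 48); // 192 B + 1000 slots * 48 B

    println!("first level:  {:.3} GB", first_level as f64 / GIB); // ~0.007
    println!("second level: {:.3} GB", second_level as f64 / GIB); // ~6.732
}
```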
this is a lot for regular machines,
maybe we should consider lowering the defaults

since this is the cache max, we will reach this eventually, right?
> maybe we should consider lowering the defaults

yeah, I believe we can get it much lower with the same perf, will give it a try

> since this is the cache max, we will reach this eventually, right?

this is the most pessimistic scenario, where every stored address also has 1000 slots in use; most of them will probably use less
ok, reduced the limits without losing perf. I've updated the charts in the description with the results for these values, and also added a note about the worst-case scenario, which is now 9.4 GB.
let's try this
pending @Rjected
@mattsse thx a lot! Just pushed changes implementing size-aware eviction. With this we get much more predictable memory usage while also being more flexible and keeping very good perf metrics. There is now a capacity limit for each cache and, as long as it is not reached, the available space can be used for whatever entries the requests dictate: for instance, a contract can have far more storage slots in the cache than the rest without increasing the overall memory footprint. This works through the cache's weigher closure. I've also included settings for time-based eviction, with explicit values for TTL and idle evictions.

The limits currently set for each cache are: storage: 8 GB, accounts: 0.5 GB, bytecode: 0.5 GB. With these we get very good perf metrics; will update the charts in the description shortly. They are currently hardcoded values, but it is now much clearer what they mean (as opposed to a number of entries in the cache that needs to be translated into actual memory usage) and they could, for instance, be configured from the CLI.
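A minimal sketch of what this configuration could look like, assuming a moka-style cache (which supports size-aware eviction via a weigher plus time-to-live / time-to-idle). The byte limit is the one quoted above for the storage cache; the durations and key/value types are purely illustrative:

```rust
use std::time::Duration;
use moka::sync::Cache;

// Illustrative stand-ins for the real key/value types.
type Address = [u8; 20];
type StorageKey = [u8; 32];
type StorageValue = [u8; 32];

fn storage_cache() -> Cache<(Address, StorageKey), StorageValue> {
    Cache::builder()
        // With a weigher installed, max_capacity is a total weight in bytes,
        // not an entry count: 8 GiB for the storage cache (the accounts and
        // bytecode caches would get 0.5 GiB each).
        .max_capacity(8 * 1024 * 1024 * 1024)
        .weigher(|_key: &(Address, StorageKey), _value: &StorageValue| {
            // Rough per-entry byte estimate (key + value).
            (20 + 32 + 32) as u32
        })
        // Time-based eviction keeps only recently relevant entries in memory.
        .time_to_live(Duration::from_secs(5 * 60)) // illustrative value
        .time_to_idle(Duration::from_secs(60))     // illustrative value
        .build()
}
```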
great, this all makes sense, and should properly bound everything by size
Based on top of #13769
Switched all the cross-block caches to size-aware eviction so that their maximum sizes are deterministic.
Added a test to estimate the sizes of the different elements of the hierarchical structure of the storage cache, and applied the resulting values to the cache's `weigher` closure (a minimal sketch of this is included below).
Added `time_to_live` and `time_to_idle` to all cross-block caches to keep only the most relevant data in memory.
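A hedged illustration of how estimated element sizes could feed the weigher for the hierarchical storage cache, assuming a moka-style cache; the constants and types are placeholders, not the actual values produced by the test:

```rust
use std::collections::HashMap;
use moka::sync::Cache;

// Placeholder per-element estimates; in the PR these come from a test that
// measures the elements of the hierarchical storage-cache structure.
const ACCOUNT_ENTRY_BYTES: u32 = 192;
const SLOT_ENTRY_BYTES: u32 = 48;

type Address = [u8; 20];
type Slots = HashMap<[u8; 32], [u8; 32]>;

fn build_storage_cache(max_bytes: u64) -> Cache<Address, Slots> {
    Cache::builder()
        .max_capacity(max_bytes)
        // The weight of an entry grows with the number of slots cached for the
        // account, so one contract can keep many slots without the cache as a
        // whole exceeding max_bytes.
        .weigher(|_addr: &Address, slots: &Slots| {
            ACCOUNT_ENTRY_BYTES
                .saturating_add(SLOT_ENTRY_BYTES.saturating_mul(slots.len() as u32))
        })
        .build()
}
```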
With these values we get performance similar to the previous ones, measured with a `reth-bench` run over a range of 1000 blocks: