Reusable module cache #4621
base: master
Conversation
IIUC this still needs to be rebased on top of the p23 module and to have the host change merged, so I haven't looked too closely at the Rust parts (IIUC currently we just don't pass the module cache to the host at all).
I think this generally looks pretty good; the producer/compiler thread divide is a good idea! I think the producer thread needs some changes, though.
To maintain the cache, I think we need to (see the sketch after this list):
- Compile all live contract modules on startup.
- Compile new contract modules during upload/restore.
- Evict entries from the cache when the underlying WASM is evicted via State Archival eviction scans.
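A minimal sketch of what those three maintenance points could look like, using purely hypothetical names and types (none of this is actual stellar-core code; the "cache" here is just a map keyed by the hash of the contract's Wasm):

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-ins, not stellar-core types.
struct CompiledModule
{
};
using Hash = std::string;
using Wasm = std::vector<uint8_t>;

class ModuleCache
{
    std::map<Hash, std::shared_ptr<CompiledModule>> mModules;

    static std::shared_ptr<CompiledModule>
    compile(Wasm const&)
    {
        return std::make_shared<CompiledModule>(); // real compilation elided
    }

  public:
    // 1. Startup: compile every live contract module found in the BucketList.
    void
    compileAll(std::vector<std::pair<Hash, Wasm>> const& liveWasm)
    {
        for (auto const& [hash, wasm] : liveWasm)
        {
            mModules.emplace(hash, compile(wasm));
        }
    }

    // 2. Upload/restore: compile a newly uploaded (or restored) contract.
    void
    addContract(Hash const& hash, Wasm const& wasm)
    {
        mModules.emplace(hash, compile(wasm));
    }

    // 3. State-archival eviction scan: drop the entry when the underlying
    //    Wasm is evicted, so evicted modules cannot leak in the cache.
    void
    evictContract(Hash const& hash)
    {
        mModules.erase(hash);
    }
};
```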
I think decoupling cache gc from eviction events is going to be expensive. If you have some background task that checks whether a given module is live or not, it will have to read from many levels of the BL to determine whether whatever BucketEntry it's looking at is the most up to date. Evicting from the cache when we do state archival evictions removes this additional read amplification (since the archival scan has to do this multi-level search already) and makes it simpler to maintain cache validity too.
The drawback to this is that initial cache generation is a little more expensive, as we're limited to a single producer thread that has to iterate through the BL in order and keep track of seen keys. If we don't have a proper gc, we can't add any modules that have already been evicted, since they would cause a memory leak.
Looking at startup as a whole, we have a bunch of tasks that are BL disk-read dominated, namely Bucket Apply, Bucket Index, and p23's upcoming Soroban state cache. Bucket Index can process all Buckets in parallel, but Bucket Apply, the Soroban state cache, and the Module Cache all require a single thread iterating the BL in order due to the outdated-keys issue (in the future we could do this in parallel, where each level marks its "last seen key" and lower levels can't make progress beyond all their parents' last seen keys, but that's too involved for v1).
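A rough sketch of the seen-key bookkeeping that single producer pass would need, with heavily simplified, hypothetical types (the real walk is level-by-level over the BucketList, newest to oldest):

```cpp
#include <cstdint>
#include <functional>
#include <set>
#include <string>
#include <vector>

// Hypothetical, heavily simplified bucket entry: a key, a tombstone flag
// (DEADENTRY), and -- when the entry is contract code -- its Wasm bytes.
struct Entry
{
    std::string key;
    bool dead{false};
    std::vector<uint8_t> wasm;
};

// Walk entries newest-to-oldest. Only the first occurrence of a key is the
// up-to-date state; older shadowed copies (and anything behind a tombstone)
// must be skipped, otherwise we would compile -- and then leak -- evicted or
// outdated modules.
void
produceLiveWasm(std::vector<Entry> const& entriesNewestFirst,
                std::function<void(std::vector<uint8_t> const&)> const& enqueueForCompile)
{
    std::set<std::string> seen;
    for (auto const& e : entriesNewestFirst)
    {
        if (!seen.insert(e.key).second)
        {
            continue; // an older, shadowed version of a key already handled
        }
        if (e.dead || e.wasm.empty())
        {
            continue; // tombstone, or not a contract-code entry
        }
        enqueueForCompile(e.wasm); // hand off to the compiler thread(s)
    }
}
```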
Given that we're adding a bunch of work on the startup path and Horizon/RPC have indicated a need for faster startup times in the past, I think it makes sense to condense Bucket Apply, Soroban state cache population, and the Module Cache producer thread into a single Work that makes a one-shot pass over the BucketList. Especially in a captive-core instance, which we still run in our infra on EBS last I checked, I assume we're going to be disk bound even with the compilation step, so if we do compilation in the same pass as Bucket Apply we might just get it for free.
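And a sketch of that one-shot-pass shape, reusing the hypothetical Entry type from the sketch above: the same in-order walk feeds Bucket Apply, the Soroban state cache, and the module-cache producer, so the disk reads are paid once (all names are illustrative):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical consumer interface: each startup task receives every live
// entry from one shared in-order BucketList pass instead of doing its own.
struct LiveEntryConsumer
{
    virtual ~LiveEntryConsumer() = default;
    virtual void onLiveEntry(Entry const& e) = 0; // Entry from the sketch above
};

// One-shot pass: Bucket Apply, Soroban state cache population, and the module
// cache producer all subscribe; actual compilation can still happen on
// separate compiler threads fed by the module-cache consumer.
void
applyBucketsOneShot(std::vector<Entry> const& entriesNewestFirst,
                    std::vector<LiveEntryConsumer*> const& consumers)
{
    std::set<std::string> seen;
    for (auto const& e : entriesNewestFirst)
    {
        if (!seen.insert(e.key).second || e.dead)
        {
            continue; // shadowed or deleted entry
        }
        for (auto* c : consumers)
        {
            c->onLiveEntry(e);
        }
    }
}
```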
I don't think this needs to be in 23.0 (other than the memory leak issue), but if we have this in mind and make the initial version a little more friendly with the other Work tasks that happen on startup, it'll be easier to optimize this later.
Updated with the following:
Updated with fix for test failure when running with background close, as well as all review comments addressed (I think!)
LGTM, but CI is still failing
I think there's a bug in the ApplyBuckets work path. Currently, we use the BucketList snapshot to compile WASM. Problem is, we call compileAllContractsInLedger in AssumeStateWork. After calling app.getBucketManager().assumeState, the raw BucketList is up to date, but we don't actually update the BucketList snapshots until after AssumeStateWork is done (see LedgerManagerImpl.cpp:392). I'm pretty sure we'd currently only compile state based on the genesis ledger when we hit the apply-bucket path.
Ah! You're right. That path is actually only there to try to make compilation happen "post catchup", even though as it stands it is also triggered on startup. I think the fix here is to remove that call.

Hmm, I think we still have to compile before catch-up though (or at least before applying ledgers in catch-up)? I assume (perhaps incorrectly) that we have asserts that compilation is always cached and will remove the functionality to lazily compile modules during tx invocation in p23. TX replay would break these invariants.

Hmmmm, maybe! I am unclear on where we should compile during catchup, then. Perhaps in …?

(In terms of asserts: there is no assert in the code currently that "every contract is cached" before we call a txn; fortunately or unfortunately, soroban will not-especially-gracefully degrade to making throwaway modules as needed. We could put an assert on the core side, and I guess we should, since it represents a bug-that-turns-into-a-performance-hazard.)

Yeah, that makes sense to me. Compilation is basically part of assuming the current ledger state, so that seems like a reasonable place to put it.

Done.
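For the record, a hedged sketch of the ordering this thread converges on, with stub functions standing in for the real stellar-core steps (names are illustrative, not the actual LedgerManagerImpl/AssumeStateWork code), plus the core-side assert idea mentioned above:

```cpp
// Illustrative ordering only -- stubs stand in for the real stellar-core steps.
static void assumeBucketState() { /* adopt buckets for the assumed ledger */ }
static void updateBucketListSnapshots() { /* refresh read-only snapshots */ }
static void compileAllContractsFromSnapshot() { /* walk snapshot, compile Wasm */ }

void
assumeStateAndCompile()
{
    // 1. Adopt the buckets for the ledger being assumed (catchup or startup).
    assumeBucketState();

    // 2. Refresh the BucketList snapshots so they reflect that ledger rather
    //    than genesis -- the ordering bug discussed above.
    updateBucketListSnapshots();

    // 3. Only now compile all live contracts from the fresh snapshot, making
    //    compilation effectively part of assuming the current ledger state.
    compileAllContractsFromSnapshot();

    // At transaction apply time, a core-side assert could then require that
    // every invoked contract's module is already cached, e.g.:
    //   releaseAssert(moduleCache.contains(wasmHash));
    // so a cache miss surfaces as a bug rather than a silent slowdown.
}
```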
This is the stellar-core side of a soroban change to surface the module cache for reuse.

On the core side we:

- Add a SorobanModuleCache to the core-side Rust code, which holds a soroban_env_host::ModuleCache for each host protocol version we support caching modules for (there is currently only one, but there will be more in the future).
- Add a CoreCompilationContext type to contract.rs which carries a Budget and logs errors to the core console logging system. This is sufficient to allow operating the soroban_env_host::ModuleCache from outside the Host.
- Pass the SorobanModuleCache into the host function invocation path that core calls during transactions.
- Keep a SorobanModuleCache in the LedgerManagerImpl that is long-lived and spans ledgers.
- Add a SharedModuleCacheCompiler that does a multithreaded load-all-contracts / populate-the-module-cache, and call this on startup when the LedgerManagerImpl restores its LCL (see the sketch below).

The main things left to do here are:

- Update to a p23 soroban submodule.

I think that's .. kinda it? The reusable module cache is just "not passed in" on p22 and "passed in" on p23, so it should just start working at the p23 boundary.
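As a rough illustration of the load-all-contracts / populate-the-module-cache step described above (this is not the actual SharedModuleCacheCompiler, just a sketch of one queue feeding N compiler threads; compileIntoSharedCache is a hypothetical stand-in for the FFI call into the Rust SorobanModuleCache):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

using Wasm = std::vector<uint8_t>;

static void
compileIntoSharedCache(Wasm const&)
{
    // Stand-in: the real thing would FFI into the Rust SorobanModuleCache.
}

// Sketch: all live contract-code blobs are queued up and N worker threads
// drain the queue, compiling each module into the shared cache.
void
populateModuleCache(std::vector<Wasm> const& allLiveContracts, size_t nThreads)
{
    std::deque<Wasm> queue(allLiveContracts.begin(), allLiveContracts.end());
    std::mutex m;
    std::vector<std::thread> workers;

    for (size_t i = 0; i < nThreads; ++i)
    {
        workers.emplace_back([&] {
            for (;;)
            {
                Wasm next;
                {
                    std::lock_guard<std::mutex> lock(m);
                    if (queue.empty())
                    {
                        return; // nothing left to compile
                    }
                    next = std::move(queue.front());
                    queue.pop_front();
                }
                compileIntoSharedCache(next);
            }
        });
    }
    for (auto& w : workers)
    {
        w.join();
    }
}
```

The point of the shape is that the slow part (compilation) parallelizes across threads while loading the contract bodies stays a single, in-order producer, which matches the producer/compiler split discussed in the review above.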