Concerns about eth1 voting in the context of caching #1537
why
I'm not following how
SGTM
I like it. This is close to what we do in Prysm, except for the tie breaking for vote decisions. We don't consider anyone else's vote at the moment when determining our eth1data vote. I am a bit concerned about caching and eth1 reorgs. If everyone caches the same way and no one invalidates their cache on eth1 re-org, that could be a problem. Could you elaborate on the assumption that eth1 re-orgs cannot affect this cache method?
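To illustrate the invalidation concern, here is a minimal sketch (a hypothetical cache shape, not Prysm's actual implementation) of pruning everything that is no longer an ancestor of the new eth1 head after a re-org:

```python
from typing import Dict, Optional

class CachedEth1Block:
    """Minimal view of a cached eth1 block (hypothetical structure)."""
    def __init__(self, block_hash: bytes, parent_hash: bytes, number: int, timestamp: int):
        self.block_hash = block_hash
        self.parent_hash = parent_hash
        self.number = number
        self.timestamp = timestamp

def prune_non_canonical(cache: Dict[bytes, CachedEth1Block],
                        new_head: CachedEth1Block) -> Dict[bytes, CachedEth1Block]:
    """Keep only blocks that are ancestors of (or equal to) the new eth1 head,
    dropping anything cached on the orphaned fork so that stale eth1_data is
    never used for voting after a re-org."""
    canonical: Dict[bytes, CachedEth1Block] = {}
    cursor: Optional[CachedEth1Block] = new_head
    while cursor is not None:
        canonical[cursor.block_hash] = cursor
        cursor = cache.get(cursor.parent_hash)
    return canonical
```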
Glad to hear that!
The present voting mechanism (when considered alongside the validator on-boarding logic) makes an assumption that eth1 will never re-org a block that is more than 128 blocks deep. This is fine, but my gripe is that eth1 voting is structured in such a way that a client is most likely going to need to keep a cache that occasionally includes all blocks up to the current eth1 head.

The mechanism I have proposed addresses my gripe by making a slightly different assumption; instead of saying "eth1 will never re-org a block that is more than 128 blocks deep", it says "eth1 will never re-org a block that is more than 128 times the expected block time (15s) deep". In other words, we judge a block's depth in the eth1 chain by its timestamp rather than by its distance from the head. The important part about my mechanism is that it only requires the client to cache blocks that are at least `ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT` old.

Note: the assumption is not necessarily "eth1 will never re-org a block 32 minutes deep", it's more along the lines of "if eth1 re-orgs past 32 min then we need an extra-protocol solution to patch eth2".
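For concreteness, the two ways of judging depth could be sketched like this (the constant values here are illustrative placeholders, not the spec's actual values):

```python
ETH1_FOLLOW_DISTANCE = 128  # illustrative placeholder
SECONDS_PER_SLOT = 15       # illustrative placeholder, used as the expected eth1 block time

def deep_enough_by_number(block_number: int, eth1_head_number: int) -> bool:
    # Current approach: depth is measured in blocks, so the current eth1 head
    # block number must be known.
    return eth1_head_number - block_number >= ETH1_FOLLOW_DISTANCE

def deep_enough_by_timestamp(block_timestamp: int, period_start_time: int) -> bool:
    # Proposed approach: depth is measured in seconds, so only the block's own
    # timestamp and the locally-computable voting period start time are needed.
    return period_start_time - block_timestamp >= ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT
```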
Closed via #1553
I think the current eth1 voting mechanism has some undesirable properties when we consider that staking eth2 clients should cache eth1 blocks. They should cache so that they can produce a block (and cast an eth1 vote) without having to contact an eth1 node at the moment of proposal.
First I will state two undesirable properties of the current system in the context of caching and then suggest a simpler system.
Undesirable property 1: Clients must cache the eth1 chain all the way up to the head
Consider the first block proposer in the eth1 voting period (`slot % SLOTS_PER_ETH1_VOTING_PERIOD == 0`). In order to calculate `get_eth1_data(distance)`, it needs to know the block number of the eth1 block at the start of the voting period (now). That block is the head of the eth1 chain.

This is the primary problem I have and it has two effects:
Undesirable property 2: You need to cache all the way back to the current eth1_data
In order to cast a vote (i.e., not trigger an exception in the spec), a node must have in its cache all descendants of the block represented by `state.eth1_data` (at the start of the current voting period). So, the cache grows linearly with the time since a successful eth1 voting period.

Additionally, if a node wants its cache to be safe in the case of an eth2 re-org, it should cache all the way back to the eth1_data in the last finalized block. Therefore, the cache also grows linearly with the time since finalization.
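To make the cache requirement concrete, here is a rough sketch (the function and parameter names are illustrative) of the range of eth1 block numbers a client has to hold under the current scheme:

```python
def required_cache_range(eth1_head_number: int,
                         current_eth1_data_block_number: int,
                         finalized_eth1_data_block_number: int) -> range:
    """Block numbers a client must keep cached under the current scheme.

    The upper bound is the eth1 head (undesirable property 1). The lower bound
    is the eth1_data referenced by the last finalized block, if the cache is to
    survive an eth2 re-org (undesirable property 2). The range therefore grows
    with time since a successful voting period and time since finalization.
    """
    lower = min(current_eth1_data_block_number, finalized_eth1_data_block_number)
    return range(lower, eth1_head_number + 1)
```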
My proposal:
Below is some rough Python code that I think is minimal and viable. I'm not convinced this should be the final solution, but it's a starting point at least.
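A minimal sketch of this kind of mechanism (helper names and constant values below are illustrative, not spec-defined): blocks become vote candidates purely by timestamp, and the vote is the most popular eth1_data already cast this period among the candidates, defaulting to the newest candidate.

```python
from typing import List, NamedTuple, Tuple

# Illustrative placeholder constants; the real values live in the spec.
ETH1_FOLLOW_DISTANCE = 128
SECONDS_PER_SLOT = 15
SLOTS_PER_ETH1_VOTING_PERIOD = 1024

class Eth1Block(NamedTuple):
    timestamp: int
    deposit_root: bytes
    deposit_count: int

# The real Eth1Data also carries a block_hash; for brevity this sketch votes on
# (deposit_root, deposit_count) pairs.
Eth1Data = Tuple[bytes, int]

def voting_period_start_time(genesis_time: int, slot: int) -> int:
    """Wall-clock time at which the current eth1 voting period began."""
    period_start_slot = slot - slot % SLOTS_PER_ETH1_VOTING_PERIOD
    return genesis_time + period_start_slot * SECONDS_PER_SLOT

def is_candidate(block: Eth1Block, period_start: int) -> bool:
    """Depth judged by timestamp: a block is votable once it is at least
    ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT older than the period start."""
    return period_start - block.timestamp >= ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT

def get_eth1_vote(current_eth1_data: Eth1Data,
                  eth1_data_votes: List[Eth1Data],
                  cached_blocks: List[Eth1Block],  # ascending by block number
                  period_start: int) -> Eth1Data:
    candidates = [(b.deposit_root, b.deposit_count)
                  for b in cached_blocks if is_candidate(b, period_start)]
    # Only honour votes that correspond to a candidate block we know about.
    valid_votes = [v for v in eth1_data_votes if v in candidates]
    # Default to the newest candidate, or keep the existing eth1_data if none.
    default_vote = candidates[-1] if candidates else current_eth1_data
    # Pick the most popular valid vote cast so far this period.
    return max(valid_votes, key=lambda v: valid_votes.count(v), default=default_vote)
```

Note that nothing above needs an eth1 block younger than `ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT`, which is what bounds the cache.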
It has the following properties that the present solution does not:

- A node may lag the eth1 chain by up to `ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT` without contacting an eth1 node and still vote perfectly.
- A node never needs to cache an eth1 block younger than `ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT`.

Basically, the solution makes the following changes:

1. Judge the depth of an eth1 block by its timestamp rather than by its distance from the eth1 head. This frees us from undesirable property 1.
2. Stop requiring that a vote is a member of the set of all eth1 data since the current vote (`all_eth1_data`). This frees us from undesirable property 2.

WRT (2), it's not clear to me why we bother with `all_eth1_data`. I have some ideas, but I'd be keen to hear the original motivations.