PeerDAS fork-choice, validator custody and parameter changes #3779
@@ -73,15 +73,18 @@ We define the following Python custom types for type hinting and readability:

| Name | Value | Description |
| - | - | - |
-| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `32` | The number of data column sidecar subnets used in the gossipsub protocol |
+| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `128` | The number of data column sidecar subnets used in the gossipsub protocol |

### Custody setting

| Name | Value | Description |
| - | - | - |
-| `SAMPLES_PER_SLOT` | `8` | Number of `DataColumn` random samples a node queries per slot |
-| `CUSTODY_REQUIREMENT` | `1` | Minimum number of subnets an honest node custodies and serves samples from |
-| `TARGET_NUMBER_OF_PEERS` | `70` | Suggested minimum peer count |
+| `SAMPLES_PER_SLOT` | `16` | Number of `DataColumn` random samples a node queries per slot |
+| `CUSTODY_REQUIREMENT` | `4` | Minimum number of subnets an honest node custodies and serves samples from |
+| `VALIDATOR_CUSTODY_REQUIREMENT` | `8` | Minimum number of subnets an honest node with validators attached custodies and serves samples from |
+| `BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET` | `Gwei(32 * 10**9)` | Balance increment corresponding to one additional subnet to custody |
+| `TARGET_NUMBER_OF_PEERS` | `100` | Suggested minimum peer count |

### Containers

@@ -201,7 +204,16 @@ def get_data_column_sidecars(signed_block: SignedBeaconBlock,

### Custody requirement

-Each node downloads and custodies a minimum of `CUSTODY_REQUIREMENT` subnets per slot. The particular subnets that the node is required to custody are selected pseudo-randomly (more on this below).
+Each node *without attached validators* downloads and custodies a minimum of `CUSTODY_REQUIREMENT` subnets per slot. A node with validators attached downloads and custodies a higher minimum of subnets per slot, determined by `get_validators_custody_requirement(state, validator_indices)`. Here, `state` is the current `BeaconState` and `validator_indices` is the list of indices corresponding to the validators attached to the node. Any node with at least one validator attached, whose attached validators have balances summing to `total_node_balance`, downloads and custodies `total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET` subnets per slot, subject to a minimum of `VALIDATOR_CUSTODY_REQUIREMENT` and a maximum of `DATA_COLUMN_SIDECAR_SUBNET_COUNT`.

```python
def get_validators_custody_requirement(state: BeaconState, validator_indices: Sequence[ValidatorIndex]) -> uint64:
    total_node_balance = sum(state.balances[index] for index in validator_indices)
    count = total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET
    return min(max(count, VALIDATOR_CUSTODY_REQUIREMENT), DATA_COLUMN_SIDECAR_SUBNET_COUNT)
```
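
For illustration, a minimal standalone sketch of how the clamping above behaves, using plain integers instead of the spec's `BeaconState` (the balances chosen below are hypothetical):

```python
# Illustrative only: recompute the custody requirement for a few example balances.
BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET = 32 * 10**9  # Gwei
VALIDATOR_CUSTODY_REQUIREMENT = 8
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128

def custody_requirement_for_balance(total_node_balance: int) -> int:
    count = total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET
    return min(max(count, VALIDATOR_CUSTODY_REQUIREMENT), DATA_COLUMN_SIDECAR_SUBNET_COUNT)

# 3 validators with 32 ETH each: still the minimum of 8 subnets
assert custody_requirement_for_balance(3 * 32 * 10**9) == 8
# 20 validators with 32 ETH each: 20 subnets
assert custody_requirement_for_balance(20 * 32 * 10**9) == 20
# 5000 validators with 32 ETH each: capped at all 128 subnets
assert custody_requirement_for_balance(5000 * 32 * 10**9) == 128
```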

The particular subnets that the node is required to custody are selected pseudo-randomly (more on this below).

A node *may* choose to custody and serve more than the minimum honesty requirement. Such a node explicitly advertises a number greater than `CUSTODY_REQUIREMENT` via the peer discovery mechanism -- for example, in their ENR (e.g. `custody_subnet_count: 4` if the node custodies `4` subnets each slot) -- up to `DATA_COLUMN_SIDECAR_SUBNET_COUNT` (i.e. a super-full node).
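
For example, a node that custodies exactly its minimum requirement could derive the value it advertises as follows; the `advertised_custody_subnet_count` helper is illustrative only and not part of the spec:

```python
# Illustrative only: the custody subnet count a node advertises via its ENR,
# assuming it custodies exactly its minimum requirement.
def advertised_custody_subnet_count(state: BeaconState,
                                    validator_indices: Sequence[ValidatorIndex]) -> uint64:
    if len(validator_indices) == 0:
        # No attached validators: the baseline honesty requirement applies
        return CUSTODY_REQUIREMENT
    # Validators attached: the balance-dependent requirement applies
    return get_validators_custody_requirement(state, validator_indices)
```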

@@ -0,0 +1,193 @@

# EIP-7594 -- Fork Choice

## Table of contents

<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Introduction](#introduction)
  - [Helpers](#helpers)
    - [`get_custody_parameters`](#get_custody_parameters)
    - [`get_sampling_columns`](#get_sampling_columns)
    - [`retrieve_column_sidecars`](#retrieve_column_sidecars)
    - [`is_data_available`](#is_data_available)
    - [`is_chain_available`](#is_chain_available)
    - [`get_head`](#get_head)
    - [`is_peer_sampling_required`](#is_peer_sampling_required)
- [Updated fork-choice handlers](#updated-fork-choice-handlers)
  - [`on_block`](#on_block)
    - [Pull-up tip helpers](#pull-up-tip-helpers)
      - [`compute_pulled_up_tip`](#compute_pulled_up_tip)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->

## Introduction

This is the modification of the fork choice accompanying EIP-7594.

### Helpers

Only the signatures of `get_custody_parameters`, `get_sampling_columns` and `retrieve_column_sidecars` are specified here; their implementations are beacon-node- and context-dependent.

#### `get_custody_parameters`

`def get_custody_parameters() -> Tuple[NodeID, uint64]`

#### `get_sampling_columns`

`def get_sampling_columns() -> Sequence[ColumnIndex]`

#### `retrieve_column_sidecars`

`def retrieve_column_sidecars(beacon_block_root: Root, columns_to_retrieve: Sequence[ColumnIndex]) -> Sequence[DataColumnSidecar]`

#### `is_data_available`

```python
def is_data_available(beacon_block_root: Root, require_peer_sampling: bool=False) -> bool:
    # Determine which columns to retrieve: the node's sampling columns when peer
    # sampling is required, otherwise its custody columns
    node_id, custody_subnet_count = get_custody_parameters()
    if require_peer_sampling:
        columns_to_retrieve = get_sampling_columns()
    else:
        columns_to_retrieve = get_custody_columns(node_id, custody_subnet_count)
    column_sidecars = retrieve_column_sidecars(beacon_block_root, columns_to_retrieve)
    return all(
        verify_data_column_sidecar_kzg_proofs(column_sidecar)
        for column_sidecar in column_sidecars
    )
```

#### `is_chain_available`

```python
def is_chain_available(store: Store, beacon_block_root: Root) -> bool:
    if beacon_block_root not in store.blocks:
        # Deal with Genesis edge cases, where current_justified_checkpoint and parent_root are not set
        return True
    block = store.blocks[beacon_block_root]
    block_epoch = compute_epoch_at_slot(block.slot)
    current_epoch = get_current_store_epoch(store)
    if block_epoch + MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS <= current_epoch:
        return True
    parent_root = block.parent_root
    return (
        is_data_available(beacon_block_root, require_peer_sampling=True)
        and is_chain_available(store, parent_root)
    )
```

#### `get_head`

```python
def get_head(store: Store) -> Root:
    # Get filtered block tree that only includes viable branches
    blocks = get_filtered_block_tree(store)
    # Execute the LMD-GHOST fork choice
    head = store.justified_checkpoint.root
    while True:
        # Get available children for the current slot
        children = [
            root for (root, block) in blocks.items()
            if (
                block.parent_root == head
                and is_data_available(
                    root,
                    require_peer_sampling=is_peer_sampling_required(store, block.slot)
                )
            )
        ]
        if len(children) == 0:
            return head
        # Sort by latest attesting balance with ties broken lexicographically
        # Ties broken by favoring block with lexicographically higher root
        head = max(children, key=lambda root: (get_weight(store, root), root))
```

#### `is_peer_sampling_required`

```python
def is_peer_sampling_required(store: Store, slot: Slot) -> bool:
    return compute_epoch_at_slot(slot) + 2 <= get_current_store_epoch(store)
```
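
As an illustration of the condition above, peer sampling is required only for blocks from epochs at least two behind the current store epoch. A minimal standalone sketch with hypothetical epoch numbers, assuming the mainnet `SLOTS_PER_EPOCH = 32`:

```python
# Illustrative only: evaluate the peer-sampling condition for example slots.
SLOTS_PER_EPOCH = 32

def requires_peer_sampling(block_slot: int, current_epoch: int) -> bool:
    block_epoch = block_slot // SLOTS_PER_EPOCH  # compute_epoch_at_slot
    return block_epoch + 2 <= current_epoch

# With the store currently in epoch 100:
assert requires_peer_sampling(block_slot=98 * 32, current_epoch=100)       # block from epoch 98
assert not requires_peer_sampling(block_slot=99 * 32, current_epoch=100)   # block from epoch 99
assert not requires_peer_sampling(block_slot=100 * 32, current_epoch=100)  # block from epoch 100
```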

## Updated fork-choice handlers

### `on_block`

*Note*: The blob data availability check is removed and replaced with an availability check on the justified checkpoint in the "pulled up state" of the block, after applying `process_justification_and_finalization`.

```python
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
    """
    Run ``on_block`` upon receiving a new block.
    """
    block = signed_block.message
    # Parent block must be known
    assert block.parent_root in store.block_states
    # Make a copy of the state to avoid mutability issues
    state = copy(store.block_states[block.parent_root])
    # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
    assert get_current_slot(store) >= block.slot

    # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
    finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    assert block.slot > finalized_slot
    # Check block is a descendant of the finalized block at the checkpoint finalized slot
    finalized_checkpoint_block = get_checkpoint_block(
        store,
        block.parent_root,
        store.finalized_checkpoint.epoch,
    )
    assert store.finalized_checkpoint.root == finalized_checkpoint_block

    # Check the block is valid and compute the post-state
    block_root = hash_tree_root(block)
    state_transition(state, signed_block, True)

    # [New in EIP7594] Do not import the block if its unrealized justified checkpoint is not available
    pulled_up_state = state.copy()
    process_justification_and_finalization(pulled_up_state)
    assert is_chain_available(store, pulled_up_state.current_justified_checkpoint.root)

    # Add new block to the store
    store.blocks[block_root] = block
    # Add new state for this block to the store
    store.block_states[block_root] = state

    # Add block timeliness to the store
    time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
    is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
    is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
    store.block_timeliness[hash_tree_root(block)] = is_timely

    # Add proposer score boost if the block is timely and not conflicting with an existing block
    is_first_block = store.proposer_boost_root == Root()
    if is_timely and is_first_block:
        store.proposer_boost_root = hash_tree_root(block)

    # Update checkpoints in store if necessary
    update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)

    # Eagerly compute unrealized justification and finality.
    compute_pulled_up_tip(store, pulled_up_state, block_root)
```

#### Pull-up tip helpers

##### `compute_pulled_up_tip`

Modified to take `pulled_up_state`, the block's state after applying `process_justification_and_finalization`. The application of `process_justification_and_finalization` now happens in `on_block`.

```python
def compute_pulled_up_tip(store: Store, pulled_up_state: BeaconState, block_root: Root) -> None:
    store.unrealized_justifications[block_root] = pulled_up_state.current_justified_checkpoint
    unrealized_justified = pulled_up_state.current_justified_checkpoint
    unrealized_finalized = pulled_up_state.finalized_checkpoint
    update_unrealized_checkpoints(store, unrealized_justified, unrealized_finalized)

    # If the block is from a prior epoch, apply the realized values
    block_epoch = compute_epoch_at_slot(store.blocks[block_root].slot)
    current_epoch = get_current_store_epoch(store)
    if block_epoch < current_epoch:
        update_checkpoints(store, unrealized_justified, unrealized_finalized)
```

Note that most clients don't use this config value (`TARGET_NUMBER_OF_PEERS`), so I guess this is more like a reference / recommendation? #3766 (comment)

I have created a PR to remove `TARGET_NUMBER_OF_PEERS` as a config variable.