
PeerDAS fork-choice, validator custody and parameter changes #3779

Open. Wants to merge 25 commits into base `dev`. Changes from all 25 commits.
10 changes: 6 additions & 4 deletions configs/mainnet.yaml
@@ -157,11 +157,13 @@ WHISK_PROPOSER_SELECTION_GAP: 2
 # EIP7594
 NUMBER_OF_COLUMNS: 128
 MAX_CELLS_IN_EXTENDED_MATRIX: 768
-DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
+DATA_COLUMN_SIDECAR_SUBNET_COUNT: 128
 MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
-SAMPLES_PER_SLOT: 8
-CUSTODY_REQUIREMENT: 1
-TARGET_NUMBER_OF_PEERS: 70
+SAMPLES_PER_SLOT: 16
+CUSTODY_REQUIREMENT: 4
+VALIDATOR_CUSTODY_REQUIREMENT: 8
+BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET: 32000000000 # 2**5 * 10**9 (= 32,000,000,000)
+TARGET_NUMBER_OF_PEERS: 100
Contributor (on `TARGET_NUMBER_OF_PEERS`): Note that most clients don't use this config value, so I guess this is more like a reference / recommendation? #3766 (comment)
Member: I have created a PR to remove `TARGET_NUMBER_OF_PEERS` as a config variable.

# [New in Electra:EIP7251]
MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: 128000000000 # 2**7 * 10**9 (= 128,000,000,000)
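As a quick sanity check of the new mainnet values (a standalone, illustrative sketch, not part of the config files): the subnet count now equals the column count, so each subnet carries exactly one column, and the custody balance increment corresponds to one 32 ETH validator.

```python
# Standalone check of the updated mainnet parameters.
NUMBER_OF_COLUMNS = 128
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128
BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET = 32_000_000_000  # Gwei

# Each subnet now carries exactly one column (previously 128 // 32 = 4 columns per subnet).
assert NUMBER_OF_COLUMNS // DATA_COLUMN_SIDECAR_SUBNET_COUNT == 1
# One additional custody subnet per 32 ETH of attached validator balance.
assert BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET == 2**5 * 10**9
```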
10 changes: 6 additions & 4 deletions configs/minimal.yaml
@@ -156,11 +156,13 @@ WHISK_PROPOSER_SELECTION_GAP: 1
 # EIP7594
 NUMBER_OF_COLUMNS: 128
 MAX_CELLS_IN_EXTENDED_MATRIX: 768
-DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
+DATA_COLUMN_SIDECAR_SUBNET_COUNT: 128
 MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
-SAMPLES_PER_SLOT: 8
-CUSTODY_REQUIREMENT: 1
-TARGET_NUMBER_OF_PEERS: 70
+SAMPLES_PER_SLOT: 16
+CUSTODY_REQUIREMENT: 4
+VALIDATOR_CUSTODY_REQUIREMENT: 8
+BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET: 32000000000 # 2**5 * 10**9 (= 32,000,000,000)
+TARGET_NUMBER_OF_PEERS: 100

# [New in Electra:EIP7251]
MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: 64000000000 # 2**6 * 10**9 (= 64,000,000,000)
10 changes: 10 additions & 0 deletions pysetup/spec_builders/eip7594.py
@@ -12,6 +12,15 @@ def imports(cls, preset_name: str):
         return f'''
 from eth2spec.deneb import {preset_name} as deneb
 '''
 
+
+    @classmethod
+    def sundry_functions(cls) -> str:
+        return """
+def retrieve_column_sidecars(beacon_block_root: Root,
+                             require_peer_sampling: bool) -> Sequence[DataColumnSidecar]:
+    return []
+"""
+
     @classmethod
     def hardcoded_custom_type_dep_constants(cls, spec_object) -> str:
@@ -27,3 +36,4 @@ def hardcoded_func_dep_presets(cls, spec_object) -> Dict[str, str]:
         return {
             'KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH': spec_object.preset_vars['KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH'].value,
         }
+

21 changes: 16 additions & 5 deletions specs/_features/eip7594/das-core.md
@@ -75,15 +75,17 @@ The following values are (non-configurable) constants used throughout the specification

| Name | Value | Description |
| - | - | - |
-| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `32` | The number of data column sidecar subnets used in the gossipsub protocol |
+| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `128` | The number of data column sidecar subnets used in the gossipsub protocol |

### Custody setting

| Name | Value | Description |
| - | - | - |
-| `SAMPLES_PER_SLOT` | `8` | Number of `DataColumnSidecar` random samples a node queries per slot |
-| `CUSTODY_REQUIREMENT` | `1` | Minimum number of subnets an honest node custodies and serves samples from |
-| `TARGET_NUMBER_OF_PEERS` | `70` | Suggested minimum peer count |
+| `SAMPLES_PER_SLOT` | `16` | Number of `DataColumnSidecar` random samples a node queries per slot |
+| `CUSTODY_REQUIREMENT` | `4` | Minimum number of subnets an honest node custodies and serves samples from |
+| `VALIDATOR_CUSTODY_REQUIREMENT` | `8` | Minimum number of subnets an honest node with validators attached custodies and serves samples from |
+| `BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET` | `Gwei(32 * 10**9)` | Balance increment corresponding to one additional subnet to custody |
+| `TARGET_NUMBER_OF_PEERS` | `100` | Suggested minimum peer count |

### Containers

@@ -269,7 +271,16 @@ def get_extended_sample_count(allowed_failures: uint64) -> uint64:

### Custody requirement

-Each node downloads and custodies a minimum of `CUSTODY_REQUIREMENT` subnets per slot. The particular subnets that the node is required to custody are selected pseudo-randomly (more on this below).
+Each node *without attached validators* downloads and custodies a minimum of `CUSTODY_REQUIREMENT` subnets per slot. A node with validators attached downloads and custodies a higher minimum number of subnets per slot, determined by `get_validators_custody_requirement(state, validator_indices)`, where `state` is the current `BeaconState` and `validator_indices` is the list of indices of the validators attached to the node. Concretely, a node whose attached validators have a combined balance of `total_node_balance` downloads and custodies `total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET` subnets per slot, subject to a minimum of `VALIDATOR_CUSTODY_REQUIREMENT` and a maximum of `DATA_COLUMN_SIDECAR_SUBNET_COUNT`.

```python
def get_validators_custody_requirement(state: BeaconState, validator_indices: Sequence[ValidatorIndex]) -> uint64:
    total_node_balance = sum(state.balances[index] for index in validator_indices)
    count = total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET
    return min(max(count, VALIDATOR_CUSTODY_REQUIREMENT), DATA_COLUMN_SIDECAR_SUBNET_COUNT)
```
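For example (an illustrative, non-normative sketch; the constants are restated locally so the snippet runs standalone), a node with a single 32 ETH validator is floored at `VALIDATOR_CUSTODY_REQUIREMENT`, ten such validators yield 10 subnets, and very large operators are capped at `DATA_COLUMN_SIDECAR_SUBNET_COUNT`:

```python
# Illustrative stand-ins for the spec constants above.
VALIDATOR_CUSTODY_REQUIREMENT = 8
BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET = 32 * 10**9  # Gwei
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128

def custody_requirement(total_node_balance: int) -> int:
    # Same clamping as get_validators_custody_requirement, on a plain Gwei balance.
    count = total_node_balance // BALANCE_PER_ADDITIONAL_CUSTODY_SUBNET
    return min(max(count, VALIDATOR_CUSTODY_REQUIREMENT), DATA_COLUMN_SIDECAR_SUBNET_COUNT)

assert custody_requirement(1 * 32 * 10**9) == 8       # one validator: the floor applies
assert custody_requirement(10 * 32 * 10**9) == 10     # ten validators: one subnet per 32 ETH
assert custody_requirement(1000 * 32 * 10**9) == 128  # capped at the subnet count
```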

The particular subnets that the node is required to custody are selected pseudo-randomly (more on this below).

A node *may* choose to custody and serve more than the minimum honesty requirement. Such a node explicitly advertises a number greater than `CUSTODY_REQUIREMENT` via the peer discovery mechanism, for example in its ENR (e.g. `custody_subnet_count: 4` if the node custodies `4` subnets each slot), up to `DATA_COLUMN_SIDECAR_SUBNET_COUNT` (i.e. a super-full node).

191 changes: 191 additions & 0 deletions specs/_features/eip7594/fork-choice.md
@@ -0,0 +1,191 @@
# EIP-7594 -- Fork Choice

## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Introduction](#introduction)
- [Helpers](#helpers)
- [Modified `is_data_available`](#modified-is_data_available)
- [New `is_chain_available`](#new-is_chain_available)
- [Modified `get_head`](#modified-get_head)
- [New `is_peer_sampling_required`](#new-is_peer_sampling_required)
- [Updated fork-choice handlers](#updated-fork-choice-handlers)
- [Modified `on_block`](#modified-on_block)
- [Pull-up tip helpers](#pull-up-tip-helpers)
- [Modified `compute_pulled_up_tip`](#modified-compute_pulled_up_tip)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->

## Introduction

This document specifies the modifications to the fork choice that accompany EIP-7594.

## Helpers

### Modified `is_data_available`

```python
def is_data_available(beacon_block_root: Root, require_peer_sampling: bool=False) -> bool:
    # `retrieve_column_sidecars` is implementation and context dependent, replacing `retrieve_blobs_and_proofs`.
    # For the given block root, it returns all column sidecars to custody, and, if `require_peer_sampling` is `True`,
    # also all column sidecars selected for peer sampling, or raises an exception if they are not available. The p2p
    # network does not guarantee sidecar retrieval outside of `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` epochs.
    column_sidecars = retrieve_column_sidecars(beacon_block_root, require_peer_sampling)
    return all(
        verify_data_column_sidecar_kzg_proofs(column_sidecar)
        for column_sidecar in column_sidecars
    )
```
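As a rough illustration of the expected contract, a client might back `retrieve_column_sidecars` with its local sidecar store along these lines (a sketch only; `get_custody_columns_for_node`, `get_sampling_columns_for_block`, `column_sidecar_db`, and `SidecarUnavailable` are hypothetical client-side names, not spec definitions):

```python
def retrieve_column_sidecars(beacon_block_root: Root,
                             require_peer_sampling: bool) -> Sequence[DataColumnSidecar]:
    # Columns this node must custody, plus, if requested, the columns it samples.
    required_columns = set(get_custody_columns_for_node())
    if require_peer_sampling:
        required_columns |= set(get_sampling_columns_for_block(beacon_block_root))
    sidecars = []
    for column_index in sorted(required_columns):
        sidecar = column_sidecar_db.get(beacon_block_root, column_index)
        if sidecar is None:
            # Per the contract above: raise if a required sidecar is unavailable.
            raise SidecarUnavailable(beacon_block_root, column_index)
        sidecars.append(sidecar)
    return sidecars
```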

### New `is_chain_available`

*Note*: if `beacon_block_root` is not found in `store.blocks`, we return `True`. This never
results in adding a block with an unavailable ancestor to `store`, because blocks are only
added to `store` once their whole ancestry has been. This situation can only present itself
around Genesis, where `current_justified_checkpoint` and `parent_root` are initially not set.

```python
def is_chain_available(store: Store, beacon_block_root: Root) -> bool:
    """
    Checks if all ancestors of `beacon_block_root` within the custody period are
    available, as determined by `is_data_available` with peer sampling enabled.
    """
    if beacon_block_root not in store.blocks:
        return True
    block = store.blocks[beacon_block_root]
    block_epoch = compute_epoch_at_slot(block.slot)
    current_epoch = get_current_store_epoch(store)
    if block_epoch + MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS <= current_epoch:
        return True
    parent_root = block.parent_root
    return (
        is_data_available(beacon_block_root, require_peer_sampling=True)
        and is_chain_available(store, parent_root)
    )
```
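Equivalently (a non-normative sketch), the recursion can be unrolled into a loop, which makes termination explicit: the walk up the ancestry ends as soon as it leaves `store.blocks` or steps outside the `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` window:

```python
def is_chain_available_iterative(store: Store, beacon_block_root: Root) -> bool:
    # Behaviorally equivalent to is_chain_available above.
    root = beacon_block_root
    current_epoch = get_current_store_epoch(store)
    while root in store.blocks:
        block = store.blocks[root]
        if compute_epoch_at_slot(block.slot) + MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS <= current_epoch:
            return True
        if not is_data_available(root, require_peer_sampling=True):
            return False
        root = block.parent_root
    return True
```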

### Modified `get_head`

*Note*: children of the current `head` are required to be available in order to be considered by the fork choice.
For blocks from the current or previous epoch (which cannot yet be finalized), availability is established through
a custody check, while for older blocks it is established through a full peer sampling check.

```python
def get_head(store: Store) -> Root:
    # Get filtered block tree that only includes viable branches
    blocks = get_filtered_block_tree(store)
    # Execute the LMD-GHOST fork choice
    head = store.justified_checkpoint.root
    while True:
        # Get available children for the current slot
        children = [
            root for (root, block) in blocks.items()
            if (
                block.parent_root == head
                and is_data_available(
                    root,
                    require_peer_sampling=is_peer_sampling_required(store, block.slot)
                )
            )
        ]
        if len(children) == 0:
            return head
        # Sort by latest attesting balance, with ties broken by favoring
        # the block with the lexicographically higher root
        head = max(children, key=lambda root: (get_weight(store, root), root))
```

### New `is_peer_sampling_required`

```python
def is_peer_sampling_required(store: Store, slot: Slot) -> bool:
    return compute_epoch_at_slot(slot) + 2 <= get_current_store_epoch(store)
```
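For instance (illustrative arithmetic only), with the store in epoch 10, blocks from epochs 10 and 9 need only the custody check, while blocks from epoch 8 or earlier also require peer sampling:

```python
# Evaluates the same predicate as is_peer_sampling_required, with plain integers.
current_epoch = 10
for block_epoch, expected in [(10, False), (9, False), (8, True), (7, True)]:
    assert (block_epoch + 2 <= current_epoch) == expected
```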

## Updated fork-choice handlers

### Modified `on_block`

*Note*: The blob data availability check is removed and replaced with an availability
check on the justified checkpoint in the "pulled up state" of the block, which is
the state after applying `process_justification_and_finalization`.

```python
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
    """
    Run ``on_block`` upon receiving a new block.
    """
    block = signed_block.message
    # Parent block must be known
    assert block.parent_root in store.block_states
    # Make a copy of the state to avoid mutability issues
    state = copy(store.block_states[block.parent_root])
    # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
    assert get_current_slot(store) >= block.slot

    # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
    finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    assert block.slot > finalized_slot
    # Check block is a descendant of the finalized block at the checkpoint finalized slot
    finalized_checkpoint_block = get_checkpoint_block(
        store,
        block.parent_root,
        store.finalized_checkpoint.epoch,
    )
    assert store.finalized_checkpoint.root == finalized_checkpoint_block

    # Check the block is valid and compute the post-state
    block_root = hash_tree_root(block)
    state_transition(state, signed_block, True)

    # [New in EIP7594] Do not import the block if its unrealized justified checkpoint is not available
    pulled_up_state = state.copy()
    process_justification_and_finalization(pulled_up_state)
    assert is_chain_available(store, pulled_up_state.current_justified_checkpoint.root)
```

Contributor: I think that we should also check the current justified checkpoint for the non-pulled-up state: `is_chain_available(store, state.current_justified_checkpoint.root)`. This is because if a block `B` from the current epoch is chosen as the head, then the voting source corresponds to the current justified checkpoint of the non-pulled-up state, i.e., `store.block_states[B].current_justified_checkpoint`.

```python
    # Add new block to the store
    store.blocks[block_root] = block
    # Add new state for this block to the store
    store.block_states[block_root] = state

    # Add block timeliness to the store
    time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
    is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
    is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
    store.block_timeliness[hash_tree_root(block)] = is_timely

    # Add proposer score boost if the block is timely and not conflicting with an existing block
    is_first_block = store.proposer_boost_root == Root()
    if is_timely and is_first_block:
        store.proposer_boost_root = hash_tree_root(block)

    # Update checkpoints in store if necessary
    update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)

    # Eagerly compute unrealized justification and finality.
    compute_pulled_up_tip(store, pulled_up_state, block_root)
```

#### Pull-up tip helpers

##### Modified `compute_pulled_up_tip`

*Note*: Modified to take `pulled_up_state`, the block's state after applying `process_justification_and_finalization`.
The application of `process_justification_and_finalization` now happens in `on_block`.

saltiniroberto (Contributor, Aug 13, 2024): Has this function been modified only to avoid executing `process_justification_and_finalization` twice (as it is already executed at line 146 now), or is there another reason? If there is no other reason, I think it is better not to modify this function, as the original is more self-contained and therefore more readable (e.g. one does not need to trace back the value assigned to `pulled_up_state`).

```python
def compute_pulled_up_tip(store: Store, pulled_up_state: BeaconState, block_root: Root) -> None:
    store.unrealized_justifications[block_root] = pulled_up_state.current_justified_checkpoint
    unrealized_justified = pulled_up_state.current_justified_checkpoint
    unrealized_finalized = pulled_up_state.finalized_checkpoint
    update_unrealized_checkpoints(store, unrealized_justified, unrealized_finalized)

    # If the block is from a prior epoch, apply the realized values
    block_epoch = compute_epoch_at_slot(store.blocks[block_root].slot)
    current_epoch = get_current_store_epoch(store)
    if block_epoch < current_epoch:
        update_checkpoints(store, unrealized_justified, unrealized_finalized)
```
@@ -2,9 +2,11 @@
 
 from eth2spec.test.context import (
     spec_state_test,
-    with_deneb_and_later,
+    with_phases,
 )
 
+from eth2spec.test.helpers.constants import DENEB
+
 from eth2spec.test.helpers.block import (
     build_empty_block_for_next_slot,
 )
@@ -34,7 +36,7 @@ def get_block_with_blob(spec, state, rng=None):
     return block, blobs, blob_kzg_proofs
 
 
-@with_deneb_and_later
+@with_phases([DENEB])
 @spec_state_test
 def test_simple_blob_data(spec, state):
     rng = Random(1234)

@@ -69,7 +71,7 @@ def test_simple_blob_data(spec, state):
     yield 'steps', test_steps
 
 
-@with_deneb_and_later
+@with_phases([DENEB])
 @spec_state_test
 def test_invalid_incorrect_proof(spec, state):
     rng = Random(1234)

@@ -97,7 +99,7 @@ def test_invalid_incorrect_proof(spec, state):
     yield 'steps', test_steps
 
 
-@with_deneb_and_later
+@with_phases([DENEB])
 @spec_state_test
 def test_invalid_data_unavailable(spec, state):
     rng = Random(1234)

@@ -125,7 +127,7 @@ def test_invalid_data_unavailable(spec, state):
     yield 'steps', test_steps
 
 
-@with_deneb_and_later
+@with_phases([DENEB])
 @spec_state_test
 def test_invalid_wrong_proofs_length(spec, state):
     rng = Random(1234)

@@ -153,7 +155,7 @@ def test_invalid_wrong_proofs_length(spec, state):
     yield 'steps', test_steps
 
 
-@with_deneb_and_later
+@with_phases([DENEB])
 @spec_state_test
 def test_invalid_wrong_blobs_length(spec, state):
     rng = Random(1234)