Restrict best LC update collection to canonical blocks #3553

Merged
27 commits merged into ethereum:dev on Jan 8, 2025

Conversation

@etan-status (Contributor) commented Nov 21, 2023

Currently, the best LC update for a sync committee period may refer to blocks that have later been orphaned, if they rank better than canonical blocks according to `is_better_update`. This was done because the most important task of the light client sync protocol is to track the correct `next_sync_committee`. However, practical implementation is quite tricky, because existing infrastructure such as fork choice modules can only be reused in limited form when collecting light client data. Furthermore, it becomes impossible to deterministically obtain the absolute best LC update available for any given sync committee period, because orphaned blocks may become unavailable.

For these reasons, a `LightClientUpdate` should only be served if it refers to data from the canonical chain as selected by fork choice. This also assists future efforts toward a reliable backward sync.

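Concretely, the restriction amounts to an ancestry check against the fork choice head. A minimal sketch of the check (hypothetical helper names and data layout, not spec code):

```python
# Hedged sketch (hypothetical names, not spec code): a LightClientUpdate is
# served only if its attested block is canonical, i.e. the fork choice head
# or one of the head's ancestors.
from typing import Dict

Root = bytes


def is_canonical(block_root: Root, head_root: Root,
                 parent_roots: Dict[Root, Root]) -> bool:
    # Walk parent links from the head down to the finalized boundary.
    root = head_root
    while True:
        if root == block_root:
            return True
        if root not in parent_roots:
            return False  # walked past the tracked boundary without a match
        root = parent_roots[root]


def should_serve_update(attested_block_root: Root, head_root: Root,
                        parent_roots: Dict[Root, Root]) -> bool:
    # Updates referring to orphaned blocks SHOULD NOT be served.
    return is_canonical(attested_block_root, head_root, parent_roots)
```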
etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Nov 21, 2023
Simplify best `LightClientUpdate` collection by tracking only canonical
data instead of tracking the best update across all branches within the
sync committee period.

- ethereum/consensus-specs#3553
@dapplion (Member) left a comment

This complicates implementations for little to no benefit; for that reason, I somewhat oppose it. However, I do see the need for canonical updates to have a backfill protocol.

@etan-status (Contributor Author)

Could you please elaborate on the complication aspect? It led to quite a bit of simplification in the case of Nimbus:

One notable aspect is that even with the old system, a proper implementation would need to track separate branches, just on a per-period basis: track the best `LightClientUpdate` for each (period, current_sync_committee, next_sync_committee) tuple. Only once finality advances can this be simplified to tracking the best `LightClientUpdate` per period. This can be tested with the minimal preset, where non-finality across an entire sync committee period is feasible.

With the new system, that remains the same, but you track the best `LightClientUpdate` for each non-finalized block, the same way we track many other per-block data for the purpose of fork choice.

So, similar to regular fork choice (which is already present):

  • When a new block is added, compute the data and attach it to the memory structure.
  • When a new head is selected, read from the memory structure and persist to the database.
  • On finality, purge entries from the memory structure.
    And, because the best `LightClientUpdate` doesn't change that often, memory can be deduplicated using a reference count (or simply a ref object, letting the language runtime manage the count). See the sketch after this list.
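For illustration, a hedged sketch of this flow (assumed names, not actual Nimbus code):

```python
# Hedged sketch (assumed names, not Nimbus code): track the best
# LightClientUpdate per non-finalized block, alongside fork choice.
from dataclasses import dataclass
from typing import Any, Callable, Dict, Iterable, Optional

Root = bytes


def is_better_update(new: Any, old: Any) -> bool:
    # Placeholder ranking; the real pyspec `is_better_update`
    # (specs/altair/light-client/sync-protocol.md) is more involved.
    new_bits = new.sync_aggregate.sync_committee_bits
    old_bits = old.sync_aggregate.sync_committee_bits
    return sum(new_bits) > sum(old_bits)


@dataclass
class CachedLightClientData:
    # Shared object; since the best update rarely changes from one block
    # to the next, most blocks can share a single (ref-counted) instance.
    best_update: Optional[Any]


class LightClientDataStore:
    def __init__(self) -> None:
        self.cache: Dict[Root, CachedLightClientData] = {}

    def on_block(self, block_root: Root, parent_root: Root,
                 new_update: Optional[Any]) -> None:
        # New block added: compute its data from the parent's entry.
        parent = self.cache.get(parent_root)
        best = parent.best_update if parent is not None else None
        if new_update is not None and (
                best is None or is_better_update(new_update, best)):
            best = new_update
        self.cache[block_root] = CachedLightClientData(best_update=best)

    def on_new_head(self, head_root: Root,
                    persist: Callable[[Any], None]) -> None:
        # New head selected: persist that branch's data to the database.
        best = self.cache[head_root].best_update
        if best is not None:
            persist(best)

    def on_finality(self, stale_roots: Iterable[Root]) -> None:
        # Finality advanced: purge finalized and orphaned entries.
        for root in stale_roots:
            self.cache.pop(root, None)
```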

Regarding "little to no benefit": I think having canonical data made available on the network allows better reasoning.

  • No other API exposes orphaned data (except perhaps when explicitly requested via a by-root request).
  • It also avoids complications when feeding the data into the portal network, because different nodes won't end up storing different versions of the data in the regular case.
  • Furthermore, it unlocks future backfill protocols for syncing the canonical history without recomputing it from the local database. Such a backfill protocol can include proofs of canonical history with the data, ensuring, for example, that someone isn't serving an arbitrary history that merely ends at the same head sync committee, which your node would then relay (a possibly malicious early history leading to the verifiable head sync committee) to others.
  • Finally, it allows providing a reference implementation with pyspecs, to ensure that most BNs compute the same history for the same chain.
  • Other implementations are not disallowed; it's a "should not", not a "shall not".

@dapplion (Member) commented Dec 1, 2023

From an offline chat: it would be great to define a direction for a backfill spec to make the motivation for this PR stronger.

@etan-status (Contributor Author)

> From an offline chat: it would be great to define a direction for a backfill spec to make the motivation for this PR stronger.

https://hackmd.io/@etan-status/electra-lc

etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Mar 3, 2024
Introduce a test runner for upcoming EF test suites related to canonical
light client data collection.

- ethereum/consensus-specs#3553
@dapplion (Member) left a comment

Adding this restriction is sensible to unlock backfilling in the future. SHOULD NOT language is okay, giving Lodestar time to migrate and Lighthouse time to catch up.

Thanks for the thorough tests! Definitely helpful for developing this.

@etan-status (Contributor Author)

minimal.zip
Extra test vectors based on v1.4.0-beta.7

etan-status added a commit to etan-status/consensus-specs that referenced this pull request Mar 4, 2024
Beacon nodes can only compute light client data locally if they have the
corresponding `BeaconState` available. This is not the case for blocks
before the initially synced checkpoint state. The p2p-interface defines
endpoints to sync light client data, but it only supports forward sync.

To enable beacon nodes to backfill light client data, we must ensure
that a malicious peer cannot convince us of fraudulent data. While it
is possible to verify light client data against the locally backfilled
blocks, blocks are not necessarily available anymore on libp2p as they
are subject to `MIN_EPOCHS_FOR_BLOCK_REQUESTS`. Light client data stays
relevant for more than 5 months, and without validating it against local
block data it is impossible to distinguish canonical light client data
from fraudulent light client data that eventually culminates in a shared
history; the old periods in that case could still be manipulated.
Furthermore, agreeing on canonical data improves caching performance and
is relevant, e.g., for the portal network.

To support efficient proof that a `LightClientUpdate` is canonical, it
is proposed to minimally extend the `BeaconState` to track the best
`SyncAggregate` of the current and previous sync committee period,
according to an implementation-independent ranking function.
The proposed ranking function is compatible with what consensus nodes
implementing ethereum#3553 are
already making available across libp2p and REST transports.
It is based on and compatible with the `is_better_update` function in
`specs/altair/light-client/sync-protocol.md`.

There are three minor differences to `is_better_update`:

1. `is_better_update` runs in the LC, so runs without fork choice.
   It needs extra conditions to prefer older data over newer data.
   The `BeaconState` ranking function can use simpler logic.
2. The LC is always initialized from a post-Altair finalized checkpoint.
   This assumption does not hold in theoretical edge cases, requiring an
   extra guard for `ALTAIR_FORK_EPOCH` in the `BeaconState` function.
3. `is_better_update` has to deal with BNs serving incomplete data while
   they are still backfilling. This is not the case with `BeaconState`.

Once the data is available in the `BeaconState`, a light client data
backfill protocol could be defined that serves, for past periods:

1. A `LightClientUpdate` from requested `period` + 1 that proves
   that the entirety of `period` is finalized.
2. `BeaconState.historical_summaries[period].block_summary_root`
   at (1)'s `attested_header.beacon.state_root` + Merkle proof.
3. For each epoch's slot 0 block within requested `period`, the
   corresponding `LightClientHeader` + Merkle multi-proof for the
   block's inclusion into (2)'s `block_summary_root`.
4. For each of the entries from (3) with `beacon.slot` within `period`,
   the `current_sync_committee_branch` + Merkle proof for constructing
   `LightClientBootstrap`.
5. If (4) is not empty, the requested `period`'s
   `current_sync_committee`.
6. The best `LightClientUpdate` from `period`, if one exists,
   + Merkle proof that its `sync_aggregate` + `signature_slot` is
   selected as the canonical best one in (1)'s
   `attested_header.beacon.state_root`.

Only the proof in (6) depends on `BeaconState` tracking the best
light client data. This modification would enshrine the logic of a
subset of `is_better_update`, but does not require adding any
`LightClientXyz` data structures to the `BeaconState`.
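For illustration, a hedged sketch of such an implementation-independent ranking function (an assumption based on the description above, not the proposal's actual logic):

```python
# Hedged sketch (assumption, not the proposal's actual function): rank
# SyncAggregate candidates within one period. With fork choice already
# fixing the branch, a simpler subset of `is_better_update` suffices:
# higher sync committee participation wins; ties keep the older data.

def participant_count(sync_committee_bits) -> int:
    return sum(1 for bit in sync_committee_bits if bit)


def is_better_sync_aggregate(new_bits, new_signature_slot: int,
                             old_bits, old_signature_slot: int) -> bool:
    new_count = participant_count(new_bits)
    old_count = participant_count(old_bits)
    if new_count != old_count:
        return new_count > old_count
    # Tie-break: prefer the earlier signature slot so already-tracked
    # data is not replaced without a real improvement.
    return new_signature_slot < old_signature_slot
```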
@etan-status (Contributor Author)

✅ Nimbus 24.2.2 passes the additional tests.

@etan-status (Contributor Author)

@hwwhww anything still blocking this?

@hwwhww mentioned this pull request Nov 15, 2024
@hwwhww (Contributor) left a comment

Looks good, as is @etan-status's usual solid work 🙏

Only some suggestions on the test format and naming.

Comment on lines 1122 to 1132
@with_phases(phases=[CAPELLA], other_phases=[DENEB, ELECTRA])
@spec_test
@with_config_overrides({
    'DENEB_FORK_EPOCH': 1 * 8 + 4,    # SyncCommitteePeriod 1 (+ 4 epochs)
    'ELECTRA_FORK_EPOCH': 3 * 8 + 4,  # SyncCommitteePeriod 3 (+ 4 epochs)
}, emit=False)
@with_state
@with_matching_spec_config(emitted_fork=ELECTRA)
@with_presets([MINIMAL], reason="too slow")
def test_deneb_electra_reorg_unaligned(spec, phases, state):
    yield from run_test_multi_fork(spec, phases, state, DENEB, ELECTRA)
@hwwhww (Contributor)

This test should be moved to the capella/light_client/ folder, as should the test_sync.py ones.

Member

Let's handle this in a separate PR, to be included in the next release. I started to make these changes but realized it wasn't as straightforward as I thought.

@etan-status (Contributor Author)

Have applied the changes to both data collection and test_sync as requested.

Comment on lines +27 to +39
#### `new_head` execution step

The given block (previously imported) should become head, leading to potential updates to:

- The best `LightClientUpdate` for non-finalized sync committee periods.
- The latest `LightClientFinalityUpdate` and `LightClientOptimisticUpdate`.
- The latest finalized `Checkpoint` (across all branches).
- The available `LightClientBootstrap` instances for newly finalized `Checkpoint`s.

```yaml
{
    head_block_root: Bytes32    -- string, hex encoded, with 0x prefix
    checks: {
```
@hwwhww (Contributor)

What do you think about aligning this more with the fork choice test format?

  1. Add a "Checks step" description.
  2. Make head_block_root one of the checked items.

@etan-status (Contributor Author)

The `head_block_root` is the actual operation performed here: it is the new head to select, and the checks describe the result of updating the head.

@etan-status (Contributor Author)

@hwwhww I have split test_sync and test_data_collection into per-fork files.
Tests that span forks live in the fork at which the test starts; e.g., an altair -> capella test is in altair.
A new decorator factory is defined, in line with existing ones, to deduplicate tests where possible; see the sketch below.
Is this to your liking as is, or would you like to request more changes?

I'll cross-check the PR against the Nimbus test runner to verify all is well, then update with another comment.
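For illustration, a hedged sketch of what such a decorator factory could look like (hypothetical name and shape; the PR's actual helper may differ):

```python
# Hedged sketch, hypothetical names: parametrize one generator-style test
# body over several forks so near-identical per-fork tests need not be
# copy-pasted.
from typing import Callable, Iterable


def with_forks(forks: Iterable[str]) -> Callable:
    def decorator(fn: Callable) -> Callable:
        def wrapper(*args, **kwargs):
            for fork in forks:
                # Each fork contributes its own test vector parts.
                yield from fn(*args, fork=fork, **kwargs)
        return wrapper
    return decorator


# Usage (hypothetical):
# @with_forks([ALTAIR, CAPELLA])
# def test_data_collection(spec, phases, state, fork):
#     yield from run_test(spec, phases, state, fork)
```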

@etan-status (Contributor Author)

Updated the generated files in the PR description for 09e8f01 (alpha.9 base) and confirmed they pass in Nimbus ✅

@etan-status (Contributor Author) commented Jan 6, 2025

Updated for v1.5.0-alpha.10 and checked against Nimbus.

## EF - Light client - Data collection [Preset: minimal]
+ Light client - Data collection - minimal/altair/light_client/data_collection/pyspec_tests/ OK
+ Light client - Data collection - minimal/bellatrix/light_client/data_collection/pyspec_tes OK
+ Light client - Data collection - minimal/bellatrix/light_client/data_collection/pyspec_tes OK
+ Light client - Data collection - minimal/bellatrix/light_client/data_collection/pyspec_tes OK
+ Light client - Data collection - minimal/capella/light_client/data_collection/pyspec_tests OK
+ Light client - Data collection - minimal/capella/light_client/data_collection/pyspec_tests OK
+ Light client - Data collection - minimal/capella/light_client/data_collection/pyspec_tests OK
+ Light client - Data collection - minimal/deneb/light_client/data_collection/pyspec_tests/l OK
+ Light client - Data collection - minimal/electra/light_client/data_collection/pyspec_tests OK
OK: 9/9 Fail: 0/9 Skip: 0/9

@jtraglia (Member) left a comment


LGTM. This is quite a large PR, so I'm unable to thoroughly review everything here, but from what I can tell it all seems reasonable. I see no obvious issues. Thank you @etan-status!

@jtraglia jtraglia merged commit c086545 into ethereum:dev Jan 8, 2025
23 checks passed
@etan-status etan-status deleted the lc-canonical branch January 8, 2025 18:59