Safety of MAX_BLOBS_PER_BLOCK, BLOB_SIDECAR_SUBNET_COUNT = 6 #32

Open
dapplion opened this issue Jan 11, 2024 · 1 comment

dapplion commented Jan 11, 2024

Gnosis has reduced the EIP-4844 parameter MAX_BLOB_GAS_PER_BLOCK to the equivalent of 2 blobs due to its faster slot times:

| EL parameter | value |
| --- | --- |
| `MAX_BLOB_GAS_PER_BLOCK` | 262144 |
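For context, the blob count implied by this value can be checked against the EIP-4844 constant GAS_PER_BLOB = 2**17 (a quick sanity check, not spec text):

```python
# EIP-4844 defines GAS_PER_BLOB = 2**17; dividing the per-block blob gas
# budget by it yields the effective max blob count.
GAS_PER_BLOB = 2**17              # 131072
MAX_BLOB_GAS_PER_BLOCK = 262144   # Gnosis EL value from the table above

assert MAX_BLOB_GAS_PER_BLOCK // GAS_PER_BLOB == 2  # 2 blobs per block
```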

However, the consensus specs define two independent variables that limit blobs in different ways. When possible we prefer not to modify consensus variables that would require custom code in consensus implementations, so that the same codebase can work for both the Ethereum and Gnosis networks. These variables are:

| variable | value | type |
| --- | --- | --- |
| `MAX_BLOBS_PER_BLOCK` | 6 | CL preset |
| `BLOB_SIDECAR_SUBNET_COUNT` | 6 | CL config |

So: is it safe to NOT reduce MAX_BLOBS_PER_BLOCK and BLOB_SIDECAR_SUBNET_COUNT?

Current usage

1. `MAX_BLOBS_PER_BLOCK` is used in the state transition to enforce the maximum number of blobs, independently of `MAX_BLOB_GAS_PER_BLOCK`:

   ```python
   def process_execution_payload(state: BeaconState, body: BeaconBlockBody, execution_engine: ExecutionEngine) -> None:
       ...
       # [New in Deneb:EIP4844] Verify commitments are under limit
       assert len(body.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK
   ```

2. `MAX_BLOBS_PER_BLOCK` is used in ReqResp to compute `MAX_REQUEST_BLOB_SIDECARS = MAX_REQUEST_BLOCKS_DENEB * MAX_BLOBS_PER_BLOCK`, which limits `/eth2/beacon_chain/req/blob_sidecars_by_root/1/` and `/eth2/beacon_chain/req/blob_sidecars_by_range/1/` protocol requests (see the sketch after this list):

   > The response MUST contain no more than `count * MAX_BLOBS_PER_BLOCK` blob sidecars.

3. `MAX_BLOBS_PER_BLOCK` is used in the gossip topic `beacon_block` to limit the count of `blob_kzg_commitments`:

   > [REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer -- i.e. validate that `len(body.signed_beacon_block.message.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK`

4. `MAX_BLOBS_PER_BLOCK` is used in the gossip topic `blob_sidecar_{subnet_id}` to upper-bound the `index` field:

   > [REJECT] The sidecar's index is consistent with `MAX_BLOBS_PER_BLOCK` -- i.e. `blob_sidecar.index < MAX_BLOBS_PER_BLOCK`.

5. `BLOB_SIDECAR_SUBNET_COUNT` is used to compute which subnet each `blob_sidecar.index` has to be published to:

   ```python
   def compute_subnet_for_blob_sidecar(blob_index: BlobIndex) -> SubnetID:
       return SubnetID(blob_index % BLOB_SIDECAR_SUBNET_COUNT)
   ```
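To make the interplay of these constants concrete, here is a minimal sketch using plain ints (mainnet Deneb values; `MAX_REQUEST_BLOCKS_DENEB = 128` is taken from the Deneb p2p spec):

```python
MAX_BLOBS_PER_BLOCK = 6
BLOB_SIDECAR_SUBNET_COUNT = 6
MAX_REQUEST_BLOCKS_DENEB = 128

# ReqResp limit from usage 2): 128 blocks * 6 blobs = 768 sidecars max.
MAX_REQUEST_BLOB_SIDECARS = MAX_REQUEST_BLOCKS_DENEB * MAX_BLOBS_PER_BLOCK
assert MAX_REQUEST_BLOB_SIDECARS == 768

# Gossip bound from usage 4): only indices 0..5 are accepted.
assert all(index < MAX_BLOBS_PER_BLOCK for index in range(6))

# Subnet mapping from usage 5): with 6 subnets each valid index maps to
# its own subnet, so an EL limit of 2 blobs leaves subnets 2..5 idle.
def compute_subnet_for_blob_sidecar(blob_index: int) -> int:
    return blob_index % BLOB_SIDECAR_SUBNET_COUNT

assert [compute_subnet_for_blob_sidecar(i) for i in range(6)] == [0, 1, 2, 3, 4, 5]
```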

Consequences of MAX_BLOBS_PER_BLOCK > MAX_BLOB_GAS_PER_BLOCK / GAS_PER_BLOB (i.e. a CL limit of 6 blobs vs an EL limit of 2):

  1. A block with too many commitments will be accepted by this initial consensus condition. If the block has too many blobs, it will be rejected by execution validation; if the length of `versioned_hashes` does not match the count of blobs, the block is also rejected (see the sketch after this list).

  2. Up to 3x more blobs may be requested and returned. Slightly higher bandwidth cost to nodes, but not a concern.

  3. A block that will become invalid per 2) may be initially accepted and propagated. A slightly higher DoS vector, but not a concern?

  4. One could publish a blob sidecar with index >= 2, and it will pass the condition highlighted in 5). However, to pass the second condition of the topic, the actual index in the blob sidecar also has to be >= 2. Then the `verify_blob_sidecar_inclusion_proof` check would fail unless the proposer has published a block that includes too many blobs. The proposer can thus temporarily increase the count of blobs propagated in the network from 2 to 6, at the expense of losing a block proposal.

  5. Nodes may subscribe to 4 subnet topics that will never publish or broadcast any messages. Could this be a problem for scoring? No: since blob publication is a function of demand, nodes can't expect a specific throughput per subnet. Thus a situation where a blob is never propagated on subnet 3 is indistinguishable from a period of no blob activity on the network.
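As a sketch of the execution-side rejection referenced in 1): the CL derives versioned hashes from the block's commitments (`kzg_commitment_to_versioned_hash` is the Deneb spec helper), and the EL must find them equal to the blob transactions' versioned hashes. `el_accepts` below is a hypothetical paraphrase of that comparison, not spec code:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"  # from the Deneb spec

def kzg_commitment_to_versioned_hash(commitment: bytes) -> bytes:
    # Deneb spec helper: version byte + sha256(commitment) truncated to 31 bytes
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def el_accepts(block_commitments: list[bytes], tx_versioned_hashes: list[bytes]) -> bool:
    # Hypothetical paraphrase of the EL check: the hashes derived from the
    # block's commitments must match the blob transactions' versioned hashes
    # one-to-one. A block with "too many" commitments therefore fails here
    # even though it passed the MAX_BLOBS_PER_BLOCK consensus check.
    derived = [kzg_commitment_to_versioned_hash(c) for c in block_commitments]
    return derived == tx_versioned_hashes
```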

Conclusions

Not reducing MAX_BLOBS_PER_BLOCK allows broadcasting invalid data over p2p, but at the cost of the proposer not having its block accepted. There are no other concerns.

@pawanjay176

I think this is safe to do.

My main concern was that invalid blocks (blocks having `kzg_commitments.len() > 2`) would continue getting gossiped since the gossip conditions are still satisfied.
Here, the worst thing an attacker can do is make nodes verify and forward invalid blocks/blobs. But this isn't really a DoS vector, since the disproportionate cost is the attacker missing its proposal slot.

I'm trying to think of some fork choice attacks that take advantage of the fact that invalid blocks/blobs are gossiped around the network, but I can't think of anything.
cc @realbigsean
