fix gossip and tx size limits for the merge #2686
Conversation
As a further input on this, I think that once statelessness becomes a reality, raising the gas limit by a significant factor will be quite realistic.
presets/mainnet/merge.yaml (Outdated)

```diff
@@ -2,8 +2,8 @@
 # Execution
 # ---------------------------------------------------------------
-# 2**20 (= 1,048,576)
-MAX_BYTES_PER_TRANSACTION: 1048576
+# 2**24 (= 16,777,216)
```
If we go from `2**20` to `2**30` (instead of `2**24`) we add 10 hashes of work (of 64-byte input) per transaction to mix in zero-hashes. At 1000 transactions per block, that's 10,000 hashes. The participation registry in Altair is mostly re-merkleized (sparse and many changes, condensed leaf nodes), and at 250k validators (mainnet today) (packed uint8, 32 per leaf) that translates to `(250000/32)*2-1 = 15624` hashes. So this compares to adding another one of those in terms of performance cost. Worth the future-compatibility, I think.
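The arithmetic above can be sketched as a back-of-the-envelope check (not spec code; the depth math assumes SSZ merkleization of a byte list virtually padded to its limit):

```python
import math

CHUNK_SIZE = 32  # bytes per SSZ merkle chunk

def extra_mixin_hashes(old_limit_bytes: int, new_limit_bytes: int) -> int:
    """Extra zero-hash mix-ins per byte list when only the list limit grows.

    SSZ merkleizes a ByteList as if padded to its limit, so raising the
    limit costs one extra 64-byte-input hash per added level of tree depth.
    """
    old_depth = int(math.log2(old_limit_bytes // CHUNK_SIZE))
    new_depth = int(math.log2(new_limit_bytes // CHUNK_SIZE))
    return new_depth - old_depth

extra_per_tx = extra_mixin_hashes(2**20, 2**30)  # 10 extra hashes per transaction
per_block = 1000 * extra_per_tx                  # 10,000 hashes at 1000 txs/block

# Comparison point from the comment: fully re-merkleizing the Altair
# participation registry at ~250k validators (uint8s packed 32 per leaf).
registry_hashes = int((250_000 / 32) * 2 - 1)    # 15,624 hashes
```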
---
That's for `MAX_BYTES_PER_TRANSACTION`.

You also get some depth via increasing `MAX_TRANSACTIONS_PER_PAYLOAD` (which you probably want to increase in concert). But, correct me if I'm wrong, this doesn't scale the same way with the number of TXs. You essentially get a single mix-in per depth, so just +X hashes when moving from `2**N` to `2**(N+X)`.
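The scaling difference can be made explicit with a small sketch (hypothetical helper names; the point is that the outer list limit is paid once per block, while the per-transaction limit is paid once per transaction):

```python
def outer_list_extra_hashes(levels_added: int) -> int:
    # MAX_TRANSACTIONS_PER_PAYLOAD: going from 2**N to 2**(N+X) adds just
    # X zero-hash mix-ins for the whole payload, once per block.
    return levels_added

def per_tx_extra_hashes(levels_added: int, tx_count: int) -> int:
    # MAX_BYTES_PER_TRANSACTION: the same depth increase is paid once per
    # transaction, so the cost scales with the number of TXs in the block.
    return levels_added * tx_count
```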
---
`2^30` seems crazy? 1 GB! But it does firmly get us out of any range we might realistically bump into, which is a reasonable design goal, imo.
---
updated constants based on this convo. check it out

bump on review @protolambda @dankrad
---
Another edge case that we should make clients aware of: now that valid application blocks can technically reach the 10 MiB limit, clients need to apply that limit consistently. If one client applies the 10 MiB limit to the size including envelope-related bytes and another applies it without them, then a block near the limit cannot get processed by some of the clients via gossip, and there could be a temporary chain split.
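A minimal sketch of the consistency concern (hypothetical function and framing-overhead value; the spec does not define an envelope adjustment):

```python
GOSSIP_MAX_SIZE_MERGE = 10 * 2**20  # 10 MiB, uncompressed

def accepts(payload: bytes, count_envelope: bool, envelope_bytes: int = 64) -> bool:
    """Accept a gossip message iff its measured size is within the limit.

    `count_envelope` models a client that (inconsistently) includes framing
    bytes in the measurement; `envelope_bytes` is a made-up overhead.
    """
    size = len(payload) + (envelope_bytes if count_envelope else 0)
    return size <= GOSSIP_MAX_SIZE_MERGE

# A block right at the limit: one client forwards it, the other drops it,
# so part of the network never sees it via gossip -> temporary chain split.
block = b"\x00" * GOSSIP_MAX_SIZE_MERGE
split_risk = accepts(block, count_envelope=False) != accepts(block, count_envelope=True)
```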
specs/merge/p2p-interface.md (Outdated)

```diff
@@ -77,8 +94,7 @@ Alias `block = signed_beacon_block.message`, `execution_payload = block.body.exe
 - _[REJECT]_ The execution payload block hash is not equal to the parent hash --
   i.e. `execution_payload.block_hash != execution_payload.parent_hash`.
 - _[REJECT]_ The execution payload transaction list data is within expected size limits,
-  the data MUST NOT be larger than the SSZ list-limit,
-  and a client MAY be more strict.
+  the data MUST NOT be larger than the SSZ list-limit.
```
This diff is confusing. Clients should be more strict than the SSZ list-limit, as that limit is higher than the gossip size limit.
---
Yeah, I see that.

It is a MUST, but the other gossip restriction is tighter. I'm loosely in favor of leaving it as is, so that the restriction is essentially `min(max_gossip_size, SSZ_limit)`.
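In other words, the effective bound a client would enforce is the tighter of the two limits (hypothetical names, values from this PR and the SSZ preset):

```python
GOSSIP_MAX_SIZE_MERGE = 10 * 2**20   # gossip message limit (uncompressed)
SSZ_TX_LIST_LIMIT = 2**30            # MAX_BYTES_PER_TRANSACTION list-limit

# The SSZ limit is a hard MUST, but any payload exceeding the gossip
# limit never propagates anyway, so the tighter bound wins in practice.
effective_limit = min(GOSSIP_MAX_SIZE_MERGE, SSZ_TX_LIST_LIMIT)
```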
---
I removed the condition. It's purely extraneous.
specs/merge/p2p-interface.md (Outdated)

| Name | Value | Description |
|---|---|---|
| `GOSSIP_MAX_SIZE_MERGE` | `10 * 2**20` (= 10,485,760, 10 MiB) | The maximum allowed size of uncompressed gossip messages starting at the Merge upgrade |
Why should we have a new configurable setting (i.e. the `_MERGE` suffix here) instead of just overwriting the already existing one?
---
That is generally how we handle configuration value updates, to be able to handle and reason about fork boundaries more simply.

See discussion in the config README -- https://github.com/ethereum/consensus-specs/tree/dev/configs#forking
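The pattern the fork-suffixed constant enables can be sketched like this (hypothetical helper; the pre-merge value of `2**20` is an assumption based on the phase0 p2p spec):

```python
GOSSIP_MAX_SIZE = 2**20              # pre-merge value (assumed, per phase0 p2p spec)
GOSSIP_MAX_SIZE_MERGE = 10 * 2**20   # post-merge value from this PR

def max_gossip_size(epoch: int, merge_fork_epoch: int) -> int:
    # Keeping both constants (rather than overwriting one in place) lets a
    # client reason about the fork boundary explicitly: a message is checked
    # against the limit of the fork it belongs to.
    return GOSSIP_MAX_SIZE_MERGE if epoch >= merge_fork_epoch else GOSSIP_MAX_SIZE
```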
---
The phase0 spec defines `MAX_CHUNK_SIZE: 2**20`, which applies to all responses in req-resp. It thus also applies to the `beacon_blocks_by_range` and `beacon_blocks_by_root` topics, limiting historical sync of blocks to 1 MiB per block. This constant is loosely defined as "network configuration", not generally part of the chain-config. It can be overwritten in-place safely to any value, if clients properly implement the SSZ size limits for other request/response types. So we should define the change of `MAX_CHUNK_SIZE`, and remind the Merge implementers to apply type-specific limits (10 MiB on the metadata req/resp can be much worse than on blocks, since it may be rate-limited or scored differently).
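A sketch of the suggested type-specific limits (topic names from the comment; the raised chunk size and the tight metadata bound are made-up illustrations, not spec values):

```python
MAX_CHUNK_SIZE = 2**20             # phase0 req/resp chunk limit (1 MiB)
MAX_CHUNK_SIZE_MERGE = 10 * 2**20  # hypothetical in-place raise for the Merge

# Per-type response-chunk bounds: blocks may need the full 10 MiB, but a
# tiny fixed-size message like metadata should stay tightly bounded so it
# cannot be abused under different rate-limiting/scoring.
RESP_CHUNK_LIMIT = {
    "beacon_blocks_by_range": MAX_CHUNK_SIZE_MERGE,
    "beacon_blocks_by_root": MAX_CHUNK_SIZE_MERGE,
    "metadata": 2**10,  # hypothetical tight bound
}

def chunk_within_limit(topic: str, chunk: bytes) -> bool:
    return len(chunk) <= RESP_CHUNK_LIMIT.get(topic, MAX_CHUNK_SIZE)
```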
Reflect changes to max transaction size in #2686.
This PR addresses two issues:

- `Transaction` byte size: allow for actual bounds within current gas-limits and much higher
- `ExecutionPayload` gossip size limits: be compatible with network limits in the geth/pow chain today

These issues were originally pointed out by @dankrad here -- https://github.com/ethereum/consensus-specs/pull/2607/files#r711053462. Please see there for more context and discussion.

Discussion points for PR:

1. If a single transaction can fill the entire `ExecutionPayload`, then a `MAX_BYTES_PER_TRANSACTION` of `2**24` can support a gas limit of up to ~270M (`2**24 * 16`). This seems safe at ~10x the current gas limit, but considering that this is a consensus value that would require a hard-fork to upgrade and that we have "safe" limits in the p2p spec, should we instead push this limit to support a higher gas-limit (e.g. use `2**26`, which would support a 1B gas limit)? The cost here is extra depth in the `Transaction` object merkle tree, which would result in a few more hashes per materialization in the normal case.
2. `MAX_TRANSACTIONS_PER_PAYLOAD` is currently `2**14`. At a minimum gas cost of 21000 gas per transaction (a simple send), this supports up to a ~344M gas limit, so on the same order as the `2**24` limit on `MAX_BYTES_PER_TRANSACTION`. If we adjust the max bound based on discussion point (1), we should similarly raise this amount to be on the same order.

EDIT:

After discussion of the above two points, `MAX_BYTES_PER_TRANSACTION` is increased to `2**30` and `MAX_TRANSACTIONS_PER_PAYLOAD` is increased to `2**20`. This ensures that both data structures, when used in the extreme, can handle gas limits of ~20B. This is 1000x greater than the gas limit today and thus, barring incredible/unforeseen circumstances, allows for growth without concern.

Some discussion on the "cost" involved in hashing here: #2686 (comment)
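The gas-limit arithmetic in the discussion points and the EDIT can be checked directly (assuming 16 gas per calldata byte and 21000 gas per minimal transaction, as the figures above imply):

```python
CALLDATA_GAS_PER_BYTE = 16  # non-zero calldata byte cost
MIN_TX_GAS = 21_000         # simple send

# Original discussion-point values:
old_gas_via_bytes = 2**24 * CALLDATA_GAS_PER_BYTE  # 268,435,456  (~270M)
old_gas_via_count = 2**14 * MIN_TX_GAS             # 344,064,000  (~344M)

# Final values from the EDIT: both structures, used in the extreme,
# cover a gas limit on the order of ~20B.
gas_via_bytes = 2**30 * CALLDATA_GAS_PER_BYTE      # 17,179,869,184 (~17.2B)
gas_via_count = 2**20 * MIN_TX_GAS                 # 22,020,096,000 (~22.0B)
```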