eth/catalyst: in zero-period dev mode and txs/withdrawals pending, commit as many non-empty blocks as possible #30104
Conversation
…available, commit as many blocks as possible until we have no more transactions/withdrawals to include
Needs a few explainer comments and then I will move it back from draft.
…o mutex contention.
@@ -96,7 +96,7 @@ func TestSimulatedBeaconSendWithdrawals(t *testing.T) {
	defer subscription.Unsubscribe()

	// generate some withdrawals
	for i := 0; i < 20; i++ {
This test case should have been written this way originally: the withdrawal queue has a maximum size of 10.
Previously, the test passed because there was no contention between adding withdrawals and committing blocks. Now that access to withdrawals is gated behind a mutex, the contention means all withdrawals in the loop can be added without blocks being sealed concurrently.
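To make the capacity point concrete, here is a minimal sketch (not geth's actual implementation) of a withdrawal queue backed by a buffered channel of capacity 10. With nothing draining the channel, only the first 10 non-blocking adds succeed, which is why a loop adding 20 withdrawals only works if block sealing runs concurrently. The names `withdrawalQueue`, `add`, and `fillQueue` are illustrative assumptions:

```go
package main

import "fmt"

// withdrawal is a stand-in for the beacon withdrawal type.
type withdrawal struct{ index uint64 }

// withdrawalQueue sketches a queue backed by a buffered channel
// with a maximum size of 10, as described in the review comment.
type withdrawalQueue struct {
	pending chan withdrawal
}

func newWithdrawalQueue() *withdrawalQueue {
	return &withdrawalQueue{pending: make(chan withdrawal, 10)}
}

// add tries to enqueue a withdrawal without blocking, failing when full.
func (q *withdrawalQueue) add(w withdrawal) error {
	select {
	case q.pending <- w:
		return nil
	default:
		return fmt.Errorf("withdrawal queue full")
	}
}

// fillQueue attempts n adds with nothing draining the queue and reports
// how many succeeded and how many were rejected.
func fillQueue(n int) (accepted, rejected int) {
	q := newWithdrawalQueue()
	for i := 0; i < n; i++ {
		if err := q.add(withdrawal{index: uint64(i)}); err != nil {
			rejected++
		} else {
			accepted++
		}
	}
	return accepted, rejected
}

func main() {
	// Mirrors the test's loop of 20: only the first 10 fit the buffer.
	a, r := fillQueue(20)
	fmt.Println("accepted:", a, "rejected:", r) // accepted: 10 rejected: 10
}
```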
In zero period mode, it doesn't seem possible for withdrawals to outstrip block production. The API loop will keep reading from the queue until it's empty, producing blocks. Maybe the buffered queue isn't an ideal data structure here, but I am struggling to see under what conditions a withdrawal would be stuck in the channel. So I'm not sure why there are so many modifications to the withdrawal logic.
For newTxs, I can see how a user may submit several txs that cannot all be included in a single round of block production. However, the solution of forcing the engine to build blocks until it produces an empty one seems crude. Using the Stats() function of TxPool seems like a much more straightforward way of determining whether there are further pending txs.
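A rough sketch of the reviewer's suggestion, with a toy pool standing in for geth's TxPool (whose Stats method reports pending and queued transaction counts): keep sealing only while the pool reports executable txs, instead of building until an empty block appears. `mockPool`, `blockCapacity`, and `commitPending` are hypothetical names for illustration:

```go
package main

import "fmt"

// mockPool is a toy stand-in for the txpool; Stats mirrors the shape of
// TxPool.Stats, returning pending (executable) and queued counts.
type mockPool struct{ pending, queued int }

func (p *mockPool) Stats() (int, int) { return p.pending, p.queued }

// blockCapacity is how many txs fit in one block in this toy model.
const blockCapacity = 3

// commitPending seals blocks while the pool still reports executable txs,
// returning how many blocks were built. No empty block is ever produced,
// because the loop checks Stats() before sealing.
func commitPending(p *mockPool) int {
	blocks := 0
	for {
		pending, _ := p.Stats()
		if pending == 0 {
			return blocks
		}
		// "Seal" a block: consume up to blockCapacity pending txs.
		if pending > blockCapacity {
			p.pending -= blockCapacity
		} else {
			p.pending = 0
		}
		blocks++
	}
}

func main() {
	pool := &mockPool{pending: 7}
	// 7 pending txs at 3 per block -> 3 non-empty blocks, then stop.
	fmt.Println("blocks built:", commitPending(pool))
}
```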
There should also be some way of determining whether a build is currently active. If the txpool issues NewTxsEvent while the simulated beacon is building a block, it will possibly force an empty block to be built immediately after. The PR avoids this with the allowEmpty parameter, but as above, I think bashing the engine to build blocks is a crude solution; it would be better to organize the top-level logic in the module so that it handles these races without expending unneeded effort generating blocks it's going to throw away.
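One way to organize that top-level logic, sketched here with hypothetical names (`builder`, `notify`, `drain` are not geth's API): coalesce any NewTxsEvent signals that arrive mid-build into a single wakeup via a capacity-1 channel, and re-check for pending work before sealing, so a notification that raced with an in-progress build never forces an extra empty block:

```go
package main

import "fmt"

// builder sketches a sealing loop whose wakeups are coalesced: any number
// of notifications collapse into one pass over the pending work.
type builder struct {
	wake    chan struct{} // capacity 1: absorbs bursts of notifications
	pending int           // txs waiting for inclusion
	built   int           // non-empty blocks produced
}

func newBuilder() *builder { return &builder{wake: make(chan struct{}, 1)} }

// notify records new txs and signals the loop; if a wakeup is already
// queued, the extra signal is dropped (coalesced) instead of queuing
// another build that might find nothing to do.
func (b *builder) notify(n int) {
	b.pending += n
	select {
	case b.wake <- struct{}{}:
	default: // a wakeup is already pending; coalesce
	}
}

// drain services every queued wakeup, sealing blocks only while there is
// actual work, so an empty block is never committed.
func (b *builder) drain() {
	for {
		select {
		case <-b.wake:
			for b.pending > 0 {
				b.pending = 0 // one block includes all current txs here
				b.built++
			}
		default:
			return
		}
	}
}

func main() {
	b := newBuilder()
	b.notify(5) // first event queues a wakeup
	b.notify(2) // arrives "mid-build": folded into the same wakeup
	b.drain()
	fmt.Println("non-empty blocks built:", b.built) // 1, not 2
}
```

The same shape (a dirty flag plus a single worker loop) is what avoids needing an allowEmpty-style override at all: the loop simply never runs when there is nothing to include.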
I think this also needs to solve #29475.
It's a side-effect of the sealing-loop logic in this PR. The last (empty) block sealed needs to be reverted, and any withdrawals it included must be re-inserted at the front of the queue. I think #30264 is a better approach.
…30264) closes #29475, replaces #29657, #30104. Fixes two issues. The first is a deadlock where the txpool attempts to reorg but can't complete because there are no readers left for the new-txs subscription. The second resolves a problem with on-demand mode where txs may be left pending when there are more pending txs than block space. Co-authored-by: Martin Holst Swende <martin@swende.se>
Right now, we only commit one block per notification of pending txs/withdrawals. This could leave dangling executable txs if all pending txs cannot fit in a single block and no subsequent new txs/withdrawals are received.