
consensus parameter storage and updates #1198

Closed
jchappelow opened this issue Jan 6, 2025 · 8 comments · Fixed by #1213

@jchappelow
Member

This issue pertains to:

  • storing current consensus parameters, which may change through some yet-undetermined mechanisms
  • defining some mechanisms for coordinating changes to consensus parameters
  • recording consensus parameter updates in block data, such that a synchronizing node will reach the correct consensus parameters

This issue is a working draft of ideas.

Storing current params

When a node commits a block, the current parameters can be recorded to a JSON file or to a special bucket of the blockstore. The latter is probably more appropriate for consistency, but the former might simplify debugging.
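A rough sketch of the blockstore-bucket option, with the bucket modeled as a plain map and an illustrative params subset (the type and field names here are hypothetical, not the actual kwil-db types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ConsensusParams is a hypothetical subset of the network parameters;
// the field names are illustrative, not the actual kwil-db types.
type ConsensusParams struct {
	MaxBlockSize int64  `json:"max_block_size"`
	Leader       string `json:"leader"`
}

// kvStore stands in for a blockstore bucket (e.g. a boltdb bucket).
type kvStore map[string][]byte

const paramsKey = "consensus-params"

// storeParams persists the current parameters when a block commits.
func storeParams(store kvStore, p ConsensusParams) error {
	b, err := json.Marshal(p)
	if err != nil {
		return err
	}
	store[paramsKey] = b
	return nil
}

// loadParams restores the parameters at startup or for debugging.
func loadParams(store kvStore) (ConsensusParams, error) {
	var p ConsensusParams
	b, ok := store[paramsKey]
	if !ok {
		return p, fmt.Errorf("no stored consensus params")
	}
	err := json.Unmarshal(b, &p)
	return p, err
}

func main() {
	store := kvStore{}
	storeParams(store, ConsensusParams{MaxBlockSize: 1 << 20, Leader: "node0"})
	p, _ := loadParams(store)
	fmt.Println(p.Leader) // prints node0
}
```

Using JSON as the bucket's value encoding keeps the stored params human-readable, which recovers some of the debuggability of the file option.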

Mechanisms to communicate and agree on changes to consensus parameters

Some independent ways are possible:

  1. height-based changes defined in the genesis config, implemented either as hard-coded changes (requiring a patch) or with a corresponding "param_updates": {} object for generalized changes to the entire parameter set.
  2. a transaction-based consensus param updates resolution, also generalized in the sense that the resolution would specify the entire "param_updates" set being proposed. Implementation? Define the resolution body and type.

We need to define the updates type, probably as a subset of the genesis config. It must support updating any given parameter to its zero value (i.e. pointer fields and/or changed bool flags).
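A sketch of what the updates type could look like, using pointer fields so that a nil pointer means "unchanged" while a pointer to the zero value is an explicit update — the distinction called out above (types and field names are illustrative, not the real parameter set):

```go
package main

import "fmt"

// ConsensusParams: illustrative subset of the genesis parameters.
type ConsensusParams struct {
	MaxBlockSize int64
	JoinExpiry   int64
	DisabledGas  bool
}

// ParamUpdates uses pointer fields: nil means "leave unchanged", while
// a pointer to the zero value explicitly sets the parameter to zero.
type ParamUpdates struct {
	MaxBlockSize *int64
	JoinExpiry   *int64
	DisabledGas  *bool
}

// Apply merges the updates into the current parameter set.
func (u ParamUpdates) Apply(p *ConsensusParams) {
	if u.MaxBlockSize != nil {
		p.MaxBlockSize = *u.MaxBlockSize
	}
	if u.JoinExpiry != nil {
		p.JoinExpiry = *u.JoinExpiry
	}
	if u.DisabledGas != nil {
		p.DisabledGas = *u.DisabledGas
	}
}

func main() {
	p := ConsensusParams{MaxBlockSize: 1 << 20, JoinExpiry: 100}
	zero := int64(0)
	ParamUpdates{JoinExpiry: &zero}.Apply(&p)
	// MaxBlockSize is unchanged; JoinExpiry is explicitly zeroed.
	fmt.Println(p.MaxBlockSize, p.JoinExpiry)
}
```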

Recording updates in block data

This refers to a field of the block header or commit info (currently block/tx results, vote info, etc.) such as ConsensusParamUpdates []ConsensusUpdates.

CometBFT did this so that parameter updates can be dictated in an application-specific manner. In our case, it seems the block processor can compute the updates itself, i.e. a transaction-based approach (see above) is used to alter the parameters. Thus recording updates in the block header may not be useful.

The one scenario where updates in the (proposed) block header may be useful is as an added validation mechanism. That is, when a validator receives a block proposal, part of its approval criterion (ACK/nACK) is to ensure that the node's computed updates agree with the updates from the leader. However, given that the proposal is sent before the leader executes the block, these updates must come from the execution of the previous block. As such, it seems this is more appropriately recorded in the commit info. It could be in both, however. Thoughts?
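A sketch of that validation step, comparing digests of the locally computed updates against what the leader recorded (the serialized updates format here is an assumption, not the real encoding):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// updatesHash digests the serialized param updates; the serialization
// format is assumed here, not the real one.
func updatesHash(serializedUpdates []byte) [32]byte {
	return sha256.Sum256(serializedUpdates)
}

// ackProposal is the validator-side check: ACK only if the updates the
// node computed from executing the previous block match the updates the
// leader recorded in the commit info.
func ackProposal(localUpdates, leaderUpdates []byte) bool {
	l, r := updatesHash(localUpdates), updatesHash(leaderUpdates)
	return bytes.Equal(l[:], r[:])
}

func main() {
	local := []byte(`{"max_block_size":1048576}`)
	fmt.Println(ackProposal(local, []byte(`{"max_block_size":1048576}`))) // true: ACK
	fmt.Println(ackProposal(local, []byte(`{"max_block_size":2097152}`))) // false: nACK
}
```

Comparing hashes rather than full update sets keeps the commit-info field small regardless of how many parameters changed.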

@jchappelow
Member Author

I'm inclined to start working out the update structure, working on the resolution, and finding a place to store in block store. It might help to get some concrete changes going.
We'll talk in the meeting tomorrow, but any thoughts/reservations right now? @charithabandi @brennanjl

@brennanjl
Collaborator

All looks good to me; I think you can get started on it.

My one reservation (which you noted) is that I'm unsure whether it's necessary to store the updates in the block header. If we have our own way of storing them (in the DB, or just some file in the root directory), I'm not totally sure why we also need them in the block header. If the network needed to coordinate a change to the block proposer, the operators could just edit their local state (probably using a utility we add to kwil-admin), and once enough of them have switched, the network continues.

That is, when a validator receives a block proposal, part of its approval criterion (ACK/nACK) is to ensure the node's computed updates agree with the updates from the leader.

I do agree that this is probably pretty useful. But say we just treated the consensus parameters as part of application state; wouldn't changing them affect the apphash, so that we don't need an extra check as part of the block proposal?

I don't have a super strong opinion on any of this, just trying to help think it out.

@jchappelow
Member Author

I do agree that this is probably pretty useful. But say we just treated the consensus parameters as part of application state; wouldn't changing them affect the apphash, so that we don't need an extra check as part of the block proposal?

I think that would work, yes. I'll see if there's a need to explicitly record them in the block, but I don't see it now. At most it could go in the (segregated) commit info.

@charithabandi
Contributor

I don't think there is strong motivation to have these updates in the block header, as both height-based and tx-based updates are applied deterministically and are replayable. However, I think the leader update should be in the block header, along with the signatures of all validators agreeing to the update, so that lagging validators or new joiners can make the necessary changes, such as setting the new leader and updating the validator set, and process the block.

Currently, we treat the consensus params as part of the app state, and any changes to them will be reflected in the app hash. This should suffice to ensure that nodes are running with the correct consensus params.

@brennanjl
Collaborator

However, I think the leader update should be in the block header, along with the signatures of all validators agreeing to the update, so that lagging validators or new joiners can make the necessary changes, such as setting the new leader and updating the validator set, and process the block.

That's a good point. A magically changed leader will cause a node syncing blocks to fork.

@jchappelow
Member Author

However, I think the leader update should be in the block header, along with the signatures of all validators agreeing to the update, so that lagging validators or new joiners can make the necessary changes, such as setting the new leader and updating the validator set, and process the block.

That's a good point. A magically changed leader will cause a node syncing blocks to fork.

Tracking leader replacement in this issue: #1112 (comment)
Replay does require the replacement to be recorded, so I'm planning to handle the leader update alongside the other network param updates.

@jchappelow
Member Author

jchappelow commented Jan 7, 2025

I don't think there is strong motivation to have these updates in the block header, as both height-based and tx-based updates are applied deterministically and are replayable. However, I think the leader update should be in the block header, along with the signatures of all validators agreeing to the update, so that lagging validators or new joiners can make the necessary changes, such as setting the new leader and updating the validator set, and process the block.

Sounds like we are in agreement here.

Currently, we treat the consensus params as part of the app state, and any changes to them will be reflected in the app hash. This should suffice to ensure that nodes are running with the correct consensus params.

I agree we should, but unless I'm misunderstanding, we don't yet include updates at each block as part of each block's apphash. Referencing `nextHash := bp.nextAppHash(bp.appHash, types.Hash(appHash), valUpdatesHash, accountsHash, txResultsHash)`. Did you mean the genesis apphash, or am I missing something that's already there for each block?

@charithabandi
Contributor

charithabandi commented Jan 7, 2025

Ah geez, nextAppHash should take in the consensus updates too (the updates or the current values). I thought I had added that, but I probably missed it among the myriad of other things.
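A sketch of how a param-updates hash could be folded into that computation; the first five arguments mirror the call quoted above, and paramUpdatesHash is the proposed addition (argument names are illustrative):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type Hash = [32]byte

// nextAppHash chains the per-block component hashes together. The first
// five arguments mirror the existing call; paramUpdatesHash is new, so
// that any change to the consensus params perturbs the apphash and
// nodes with divergent params fail the apphash comparison immediately.
func nextAppHash(prevAppHash, changesetsHash, valUpdatesHash, accountsHash, txResultsHash, paramUpdatesHash Hash) Hash {
	h := sha256.New()
	for _, c := range []Hash{prevAppHash, changesetsHash, valUpdatesHash, accountsHash, txResultsHash, paramUpdatesHash} {
		h.Write(c[:])
	}
	var out Hash
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var zero Hash
	withUpdates := sha256.Sum256([]byte("param updates blob"))
	a := nextAppHash(zero, zero, zero, zero, zero, zero)
	b := nextAppHash(zero, zero, zero, zero, zero, withUpdates)
	fmt.Println(a == b) // false: divergent params yield divergent apphashes
}
```

Hashing the current parameter values (rather than only the updates) would also cover blocks where nothing changed, at the cost of recomputing the digest every block.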

@charithabandi charithabandi linked a pull request Jan 17, 2025 that will close this issue