Add BEEFY finality pallet with basic functionality #1606
Conversation
Please keep in mind that the messaging pallet has no proof verification logic - that is done outside of the pallet. So you can simply make another adapter for this new pallet and configure the messaging pallet to use it instead of the GRANDPA pallet adapter. I don't see any significant changes here.
Re proof size: I'm not insisting here, just sharing my thoughts about the proof. You should make your own decision here :)
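For illustration, a shape-level sketch of that adapter idea. The trait and method names here are hypothetical stand-ins, not the real messaging pallet's interface; the point is only that the messaging pallet depends on a small adapter trait, so a BEEFY-backed implementation can be swapped in for the GRANDPA-backed one in the runtime config.

// Hypothetical names; the real messaging-pallet trait may differ.
pub trait HeaderChainAdapter {
	/// State root of a finalized bridged block, if this pallet knows it.
	fn finalized_state_root(block_hash: [u8; 32]) -> Option<[u8; 32]>;
}

pub struct GrandpaAdapter;
pub struct BeefyAdapter;

impl HeaderChainAdapter for GrandpaAdapter {
	fn finalized_state_root(_hash: [u8; 32]) -> Option<[u8; 32]> {
		// Would read the imported header from the GRANDPA pallet's storage.
		None
	}
}

impl HeaderChainAdapter for BeefyAdapter {
	fn finalized_state_root(_hash: [u8; 32]) -> Option<[u8; 32]> {
		// Would read data imported by the new BEEFY pallet instead.
		None
	}
}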
Couple of questions
lgtm
Force-pushed from f4bb151 to 202068d
Review still in progress, but leaving some ideas
pub fn submit_commitment(
	origin: OriginFor<T>,
	commitment: BridgedBeefySignedCommitment<T, I>,
	validator_set: BridgedBeefyAuthoritySet<T, I>,
	mmr_leaf: Box<BridgedBeefyMmrLeaf<T, I>>,
	mmr_proof: BridgedMmrProof<T, I>,
) -> DispatchResult
I think in the first version we deploy, we should just save the current `validator_set` (full set of BEEFY validator keys) in storage instead of using block transaction space here on each call. We know from the MMR leaf contents on which block the set changes, and we have its "metadata" (validator set keys MMR root) to validate the input.

This call will then only accept a commitment if either:
- it's for a BEEFY mandatory block (session-boundary block) and the full new validator set is also passed as an argument (after validating it against the known metadata, we update storage with the new set),
- or it's for a non-mandatory BEEFY block, the `validator_set` param is `None`, and the one from storage is used (we also verify that the one from storage matches the current commitment metadata).
Suggested change:
 pub fn submit_commitment(
 	origin: OriginFor<T>,
 	commitment: BridgedBeefySignedCommitment<T, I>,
-	validator_set: BridgedBeefyAuthoritySet<T, I>,
+	validator_set: Option<BridgedBeefyAuthoritySet<T, I>>,
 	mmr_leaf: Box<BridgedBeefyMmrLeaf<T, I>>,
 	mmr_proof: BridgedMmrProof<T, I>,
 ) -> DispatchResult
Or we could expose a different `submit_mandatory_commitment()` call that takes `validator_set`, while `submit_commitment()` doesn't.
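A minimal standalone sketch of that decision logic, with simplified stand-in types (the real pallet would use its `BridgedBeefy*` generics and read/write the stored set via pallet storage):

// Simplified stand-ins for the pallet's types; illustration only.
pub struct AuthoritySet {
	pub id: u64,
	pub keys_mmr_root: [u8; 32], // the "metadata" kept to validate a full set
}

pub enum SetError {
	MissingValidatorSet,    // mandatory block, but no new set supplied
	UnexpectedValidatorSet, // non-mandatory block, set supplied anyway
	InvalidValidatorSet,    // set doesn't match the known metadata
}

/// Pick the validator set to verify a commitment with. `stored` is the set
/// saved at the previous session boundary; `supplied` is the optional
/// `validator_set` argument; `is_mandatory` marks a session-boundary block.
pub fn select_validator_set(
	stored: AuthoritySet,
	supplied: Option<AuthoritySet>,
	is_mandatory: bool,
	expected_keys_mmr_root: [u8; 32], // known from the MMR leaf contents
) -> Result<AuthoritySet, SetError> {
	match (is_mandatory, supplied) {
		// Session boundary: caller must pass the full new set; validate it
		// against known metadata (the caller would then persist it).
		(true, Some(new_set)) if new_set.keys_mmr_root == expected_keys_mmr_root =>
			Ok(new_set),
		(true, Some(_)) => Err(SetError::InvalidValidatorSet),
		(true, None) => Err(SetError::MissingValidatorSet),
		// Regular block: use the stored set, after checking its metadata too.
		(false, None) if stored.keys_mmr_root == expected_keys_mmr_root =>
			Ok(stored),
		(false, None) => Err(SetError::InvalidValidatorSet),
		(false, Some(_)) => Err(SetError::UnexpectedValidatorSet),
	}
}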
We can do this. The current approach is not efficient at all, so we'll definitely need to change it before deploying. Would it be ok to track this as part of #1608?
modules/beefy/src/lib.rs (outdated)
//! (e.g. storage proofs, transaction inclusion proofs, etc.). There are two options to do that:
//!
//! - the cheap option only works when the proof is based on some recent header. Then the submitter
//!   may relay on the fact that the pallet is storing hashes of the most recent bridged headers.
Suggested change:
-//! may relay on the fact that the pallet is storing hashes of the most recent bridged headers.
+//! may rely on the fact that the pallet is storing hashes of the most recent bridged headers.
Also, this is not really the case in the current version of the PR - we can leave this doc in place if we plan to do it like this later, otherwise let's change the doc to be accurate.
I removed the comment for the moment since I'm not sure yet what option we'll choose here. We'll update the documentation after we make the choice.
	authority_set.validators().iter().zip(commitment.signatures.iter()).enumerate()
{
	if let Some(sig) = maybe_sig {
		if BeefyVerify::<BridgedBeefyCommitmentHasher<T, I>>::verify(sig, &msg, authority) {
For future work, I'd be interested if there were a more efficient way to verify that doesn't require re-hashing the message for each signature.
Good point. I think we can add a `verify_prehashed()` method to the `BeefyVerify` trait. I created the following issue to track this: #1656.
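For illustration, a sketch of the idea. The trait and method below are hypothetical, not the actual sp-beefy API (the real change is tracked in #1656); it only shows how hashing once outside the loop avoids re-hashing the identical commitment per signature.

use sp_core::hashing::keccak_256;

// Hypothetical trait extension; the real `BeefyVerify` signature may differ.
pub trait VerifyPrehashed {
	type Signer;
	/// Verify against an already-hashed message, so the (identical)
	/// commitment is hashed once instead of once per signature.
	fn verify_prehashed(&self, hash: &[u8; 32], signer: &Self::Signer) -> bool;
}

pub fn count_valid_signatures<S: VerifyPrehashed>(
	encoded_commitment: &[u8],
	signatures: &[(Option<S>, S::Signer)],
) -> usize {
	// Hash once, outside the loop over validators.
	let hash = keccak_256(encoded_commitment);
	signatures
		.iter()
		.filter(|(maybe_sig, signer)| {
			maybe_sig.as_ref().map_or(false, |sig| sig.verify_prehashed(&hash, signer))
		})
		.count()
}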
Green light to merge from me 👍
* pallet-bridge-beefy: initial commit
General context
Related to: #2469
This PR is a continuation of: #1367
This PR introduces a new pallet that can handle BEEFY commitments. For the beginning, I would just like to push an initial version of this pallet with basic functionality, and then introduce more functionality, improvements, and fixes for the known shortcomings incrementally in future PRs. More details below:
Prerequisites
These changes are compiled against a custom branch of Substrate. Before merging, we need #1597 to be merged, and we also need to add paritytech/substrate#12516 to our Substrate branch.

#1597 is merged and I cherry-picked paritytech/substrate#12516 into the `sv-locked-for-gav-xcm-v3-and-bridges-2` branch. So it should be ready for merge now.

Known shortcomings
Leaf version
For the moment we're not checking the MMR Leaf version. I'm not sure if we have to, or if it will be handled automatically by the encode/decode logic.
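If an explicit check does turn out to be needed, it might look roughly like this. The type below is a stand-in, not the real beefy-primitives `MmrLeafVersion`; the one-byte packing (3 major bits, 5 minor bits) is an assumption about the BEEFY encoding.

// Stand-in for the real versioned-leaf type; illustration only.
pub struct MmrLeafVersion(u8);

impl MmrLeafVersion {
	/// Assumed packing: 3 most significant bits major, 5 bits minor.
	pub fn split(&self) -> (u8, u8) {
		(self.0 >> 5, self.0 & 0b0001_1111)
	}
}

/// Reject leaves whose *major* version we don't understand; minor bumps
/// are assumed to stay decodable by the pallet.
pub fn ensure_supported_leaf_version(
	version: &MmrLeafVersion,
	supported_major: u8,
) -> Result<(), &'static str> {
	let (major, _minor) = version.split();
	if major == supported_major {
		Ok(())
	} else {
		Err("unsupported MMR leaf (major) version")
	}
}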
The stored data
Right now, the pallet stores a map of
block_number => { parent_number_and_hash, mmr_root }
which has one entry for each of the latestCommitmentsToKeep
imported commitments.There are 2 issues with this:
FinalityApi::best_finalized()
method returns a full header (it calls thebest_finalized()
method exposed by the GRANDPA finality pallet). This method is used by the relayersparse_finalized_storage_proof(hash: BridgedBlockHash<T, I>, storage_proof: sp_trie::StorageProof, ...)
method exposed by the GRANDPA finality pallet which:header.state_root()
in order to be able to check storage proofsSo we might want to:
In order to stay compatible with this logic. Otherwise we'll need significant changes to support both GRANDPA and BEEFY.
On the other hand, with BEEFY we can prove any previous block if we have the MMR root and the caller provides a valid proof. So this would be a valid alternative to
parse_finalized_storage_proof(hash: BridgedBlockHash<T, I>, storage_proof: sp_trie::StorageProof, ...)
. We could have a similar method that provides the header, a mmr proof and a storage proof. Not sure what's the right trade-off here.Probably storing header would be the easiest solution for the beginning, but I would like to address this in a future PR.
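A rough, self-contained sketch of that alternative flow. All helper functions and types here are placeholders, not the real sp-trie / pallet-mmr APIs; it only shows the two-step check: MMR proof first, then an ordinary storage proof against the proven header's state root.

// Illustrative stand-ins only; real code would call the actual MMR and
// trie-proof verification routines.
pub struct Header {
	pub number: u32,
	pub state_root: [u8; 32],
}

/// Placeholder: verify that `header` is committed to by `mmr_root`.
fn verify_mmr_proof(_mmr_root: [u8; 32], _header: &Header, _proof: &[Vec<u8>]) -> bool {
	unimplemented!("would call the real MMR verification")
}

/// Placeholder: read `key` from a storage proof checked against `state_root`.
fn read_from_proof(
	_state_root: [u8; 32],
	_proof: &[Vec<u8>],
	_key: &[u8],
) -> Result<Option<Vec<u8>>, &'static str> {
	unimplemented!("would call the real trie-proof verification")
}

/// With BEEFY, a single trusted MMR root lets the caller prove *any* past
/// header; that header's state root then anchors the storage proof.
pub fn parse_storage_proof_via_mmr(
	trusted_mmr_root: [u8; 32],
	header: Header,
	mmr_proof: Vec<Vec<u8>>,
	storage_proof: Vec<Vec<u8>>,
	key: &[u8],
) -> Result<Option<Vec<u8>>, &'static str> {
	if !verify_mmr_proof(trusted_mmr_root, &header, &mmr_proof) {
		return Err("invalid MMR proof for the given header");
	}
	read_from_proof(header.state_root, &storage_proof, key)
}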
Unimplemented
Related to points 2. and 4. here:
The pallet is not storing validator public keys, but right now it's receiving the entire list of validators in `submit_commitment()`.
I'm not sure if providing a Merkle proof for each validator is efficient, even if we could work with only 1/3 of the signatures. I think a proof would require log2(validators.len()) hashes. So, for example, for 1000 validators:

- Current approach (sending all the validators and signatures): 1000 validators + 1000 signatures = 1000 * 33 bytes + 1000 * 64 bytes = 97k bytes
- Sending only 1/3 of the validators and signatures + proofs: 333 validators + 333 signatures + 333 proofs = 333 * 33 bytes + 333 * 64 bytes + 333 * 10 * 32 bytes = 139k bytes

Please correct me if I'm wrong or if I'm missing something.
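The numbers above check out. A quick reproduction of the estimate, under the same assumptions (33-byte compressed public keys, 64-byte signatures, 32-byte proof hashes, and log2(1000) ≈ 10 hashes per Merkle proof):

fn main() {
	let (key, sig, hash) = (33u64, 64u64, 32u64);

	// Current approach: all 1000 validator keys + all 1000 signatures.
	let all = 1000 * key + 1000 * sig;
	assert_eq!(all, 97_000); // ≈ 97k bytes

	// Alternative: 1/3 of keys and signatures, plus a ~10-hash proof per key.
	let third = 333 * (key + sig + 10 * hash);
	assert_eq!(third, 138_861); // ≈ 139k bytes, i.e. larger, not smaller

	println!("full set: {all} bytes, 1/3 + proofs: {third} bytes");
}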