
Sassafras Consensus #4600

Closed
wants to merge 86 commits into from

Conversation

sorpaas
Member

@sorpaas sorpaas commented Jan 10, 2020

This is a draft implementation of HABE/BADASS/SASSY/SASSAFRAS consensus. Validation logic is mostly there, but I've yet to write the proposing logic and runtime helpers. Eventually, the VRF used also needs to switch from schnorrkel to ring-vrf. Not quite ready for review as there are millions of TODOs, but please feel free to take a look and leave grumbles.

Overview

This describes the schnorrkel-based Sassafras as it is currently written. Note that it will switch to ring-vrf, so some of the specifics will change.

Two VRFs are generated at each block: the ticket VRF, which is used for proof of block production rights, and the post-block VRF, which is used for randomness collection. For ticket VRFs, three epochs are tracked:

  • At epoch N, validators generate several ticket VRF outputs and keep only those below a threshold (a rough sketch follows this list).
  • At epoch N+1, validators send each VRFProof to another pseudo-randomly chosen validator, who is expected to broadcast it for inclusion in a block.
  • At epoch N+2, the included proofs are sorted in "outside-in" order, and each proposer is required to show that they are the one who generated the proof.
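
A minimal sketch of the epoch-N step, assuming a hypothetical vrf_output helper (a plain std hash here) in place of the real schnorrkel ticket VRF; MAX_ATTEMPTS and TICKET_THRESHOLD are illustrative constants, not values from this PR:

```rust
// Hedged sketch: each validator makes a bounded number of ticket attempts for
// the target epoch and keeps only those whose VRF output falls below the
// threshold. `vrf_output` is a hypothetical stand-in (a plain hash here); the
// real code would use a schnorrkel (later ring-vrf) VRF over a transcript of
// the epoch randomness and attempt index.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const MAX_ATTEMPTS: u32 = 32;                // illustrative per-validator bound
const TICKET_THRESHOLD: u64 = u64::MAX / 16; // illustrative threshold

// Hypothetical stand-in for the ticket VRF output of one attempt.
fn vrf_output(epoch_randomness: &[u8; 32], attempt: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    epoch_randomness.hash(&mut hasher);
    attempt.hash(&mut hasher);
    hasher.finish()
}

/// Attempts this validator keeps (and will later claim slots with).
fn winning_tickets(epoch_randomness: &[u8; 32]) -> Vec<(u32, u64)> {
    (0..MAX_ATTEMPTS)
        .map(|attempt| (attempt, vrf_output(epoch_randomness, attempt)))
        .filter(|&(_, out)| out < TICKET_THRESHOLD)
        .collect()
}
```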

Three digests are included in the header (a rough sketch of their shapes follows the list):

  • PreDigest: The pre-runtime digest, which contains the ticket VRF and post-block VRF proofs; generated by consensus.
  • PostBlockDescriptor: Tracks ticket VRF commitments to be included in the current block; generated by the runtime.
  • NextEpochDescriptor: Describes the next epoch, same as in BABE. Note that the enacted validators only start validating two epochs later.
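
A rough sketch of how these three digest items might be shaped; all field names and types below are illustrative placeholders, not the definitions in this PR:

```rust
// Hedged sketch of the three digest items; field names and types are
// illustrative placeholders only.

type VrfOutput = [u8; 32]; // placeholder for the schnorrkel VRF output
type VrfProof = [u8; 64];  // placeholder for the schnorrkel VRF proof

/// Pre-runtime digest produced by the block author (consensus side).
struct PreDigest {
    authority_index: u32,
    slot: u64,
    ticket_vrf_output: VrfOutput,
    ticket_vrf_proof: VrfProof,
    post_block_vrf_output: VrfOutput,
    post_block_vrf_proof: VrfProof,
}

/// Runtime-generated digest tracking ticket VRF commitments in this block.
struct PostBlockDescriptor {
    commitments: Vec<VrfOutput>,
}

/// Runtime-generated digest describing the next epoch; the enacted
/// authorities only start validating two epochs later.
struct NextEpochDescriptor {
    authorities: Vec<[u8; 32]>,
    randomness: [u8; 32],
}
```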

Verifier and Block Import Logic

Pre-runtime digest verification (a condensed sketch follows the list):

  1. Verify pre-runtime digest.
  2. Verify that the slot is increasing, and not in the future.
  3. Check signature.
  4. Check that the ticket VRF is of a valid index in auxiliary.validating.
  5. Check that the ticket VRF is valid.
  6. Check that the post-block VRF is valid.
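
A condensed sketch of the checks above; every type and helper below is a hypothetical stand-in, shown only to make the order of the checks concrete:

```rust
// Hedged sketch of the pre-runtime digest checks; all names are hypothetical
// stand-ins for code in this PR. Step 1 (extracting the pre-runtime digest
// from the header) is assumed to have produced `claim` already.

struct PreDigestClaim {
    slot: u64,
    authority_index: u32,
    ticket_index: usize,
    // VRF outputs and proofs omitted for brevity.
}

struct Auxiliary {
    // Tickets ("validating" proofs) enacted for the current epoch.
    validating: Vec<[u8; 32]>,
}

#[derive(Debug)]
struct VerificationError;

fn verify_pre_digest(
    claim: &PreDigestClaim,
    parent_slot: u64,
    now_slot: u64,
    aux: &Auxiliary,
    signature_ok: impl Fn(u32) -> bool,        // 3. seal signature check (stub)
    ticket_vrf_ok: impl Fn(&[u8; 32]) -> bool, // 5. ticket VRF check (stub)
    post_block_vrf_ok: impl Fn() -> bool,      // 6. post-block VRF check (stub)
) -> Result<(), VerificationError> {
    // 2. The slot must strictly increase and must not be in the future.
    if claim.slot <= parent_slot || claim.slot > now_slot {
        return Err(VerificationError);
    }
    // 3. Check the seal signature of the claimed authority.
    if !signature_ok(claim.authority_index) {
        return Err(VerificationError);
    }
    // 4. The claimed ticket must be a valid index into `auxiliary.validating`.
    let ticket = aux.validating.get(claim.ticket_index).ok_or(VerificationError)?;
    // 5. The ticket VRF must verify against that ticket.
    if !ticket_vrf_ok(ticket) {
        return Err(VerificationError);
    }
    // 6. The post-block VRF must verify as well.
    if !post_block_vrf_ok() {
        return Err(VerificationError);
    }
    Ok(())
}
```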

Post-block digest verification:

  • Push any ticket VRF commitments.

Next-epoch descriptor digest verification:

  1. Check descriptor validity.
  2. Sort the validating proofs in "outside-in" order (see the sketch after this list).
  3. Push into the pool auxiliary.
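
The "outside-in" order is not spelled out here; the sketch below shows one common reading, where the sorted proofs are assigned to slots alternating between the start and end of the epoch, moving toward the middle. This is an illustration, not necessarily the exact rule in the PR:

```rust
// Hedged sketch of one reading of "outside-in" ordering: sorted proofs are
// assigned to slots alternating between the two ends of the epoch, moving
// toward the middle.

fn outside_in<T: Clone>(sorted: &[T]) -> Vec<T> {
    let n = sorted.len();
    let mut slots: Vec<Option<T>> = vec![None; n];
    let (mut front, mut back) = (0usize, n);
    for (i, item) in sorted.iter().enumerate() {
        if i % 2 == 0 {
            slots[front] = Some(item.clone());
            front += 1;
        } else {
            back -= 1;
            slots[back] = Some(item.clone());
        }
    }
    slots.into_iter().map(|s| s.expect("every slot filled")).collect()
}

// Example: a sorted list [1, 2, 3, 4, 5] is laid out across the epoch's slots
// as [1, 3, 5, 4, 2].
```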

TODOs

  • Proposing and runtime helpers.
  • Refactor some BABE code into a separate crate for reuse (mostly VRF, threshold, and secondary pre-digest).
  • Figure out a way to implement networking during the publishing phase.

@sorpaas sorpaas added the A3-in_progress Pull request is in progress. No review needed at this stage. label Jan 10, 2020
@rphmeier
Contributor

rphmeier commented Jan 13, 2020

Figure out a way to implement networking during the publishing phase.

One of the easiest things to do would be to implement a polite-gossip protocol, where the packet structure is { data, epoch_id, sender, recipient, sender_signature }, with data encrypted to the recipient's public key. Recipients attempt to decrypt any messages that are labeled as being sent to them.
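
A rough Rust shape for that packet, with placeholder types only; nothing below is taken from the PR itself:

```rust
// Hedged sketch of the proposed polite-gossip packet; all types are
// illustrative placeholders.

type EpochId = u64;
type AuthorityId = [u8; 32];
type Signature = [u8; 64];

struct GossipPacket {
    /// VRFProof payload, encrypted to the recipient's public key.
    data: Vec<u8>,
    epoch_id: EpochId,
    sender: AuthorityId,
    recipient: AuthorityId,
    sender_signature: Signature,
}

// On receipt, a validator only attempts to decrypt packets whose `recipient`
// field matches its own key.
```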

We probably need to make sending to 2 different parties slashable to prevent DoS, though. That, or ensure that we never hold more than one message by the same signer for the same epoch in memory, as long as we don't require agreement on the VRFProof sent by a specific validator.

@burdges

burdges commented Jan 13, 2020

I added a simple encryption module for schnorrkel keys in https://github.com/w3f/schnorrkel/blob/master/src/aead.rs and we can do the same for other keys. I have not attempted to be as compatible as possible with what other people do, but the aead crate should help there and provides in-place encryption modes.

@burdges

burdges commented Jan 13, 2020

We probably need to make sending to 2 different parties slashable to prevent DoS, though.

I proposed that H(omega || "WHO") mod num_validators should determine the publishing validator in https://github.com/w3f/research/blob/master/docs/papers/sass/sass-2-announce.tex#L52, making any other repeater invalid.
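
A minimal sketch of that rule, with DefaultHasher standing in for the real hash H and omega being the announced ticket VRF output:

```rust
// Hedged sketch of selecting the designated repeater as
// H(omega || "WHO") mod num_validators; DefaultHasher is only a placeholder
// for the real hash, and `omega` is the ticket VRF output being announced.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn designated_repeater(omega: &[u8; 32], num_validators: u64) -> u64 {
    assert!(num_validators > 0);
    let mut hasher = DefaultHasher::new();
    omega.hash(&mut hasher);
    b"WHO".hash(&mut hasher);
    hasher.finish() % num_validators
}

// Any validator other than `designated_repeater(omega, n)` that repeats this
// ticket would be treated as invalid.
```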

I included the max_winners per block producer that you suggested in https://github.com/w3f/research/blob/master/docs/papers/sass/sass-2-announce.tex#L57, but max_repeats also limits the number each repeater puts into the mempool, in https://github.com/w3f/research/blob/master/docs/papers/sass/sass-2-announce.tex#L60.

@sorpaas sorpaas requested a review from andresilva January 21, 2020 11:35
@burdges

burdges commented Apr 13, 2020

We noticed some games in which validators reuse keys, @sorpaas, so we should probably use the same key for both VRFs. I'll make further comments in the write-up eventually and need to check the code more carefully. We can chat about it if you want, of course.

@burdges

burdges commented Apr 26, 2020

As #5788 impacts how BABE generates the future epoch's randomness, we might consider optimizing our technique for computing the future epoch's randomness too.

At present, we concatenate all make_bytes results from BABE VRFs in fn compute_randomness. Is it expensive to reference so many past blocks in (0..segment_idx).flat_map(|i| <UnderConstruction>::take(&i))?

If so, you could track some pre_next_randomness that block authors add to their block header and iterate like pre_next_randomness = H(pre_next_randomness ++ vrf_io.make_bytes()). We may have rejected that because it increases the block header size, but I'm not sure anymore.
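
A rough sketch of that accumulator, with DefaultHasher tiled out to 32 bytes as a placeholder for the real hash H:

```rust
// Hedged sketch of the proposed accumulator
// pre_next_randomness = H(pre_next_randomness ++ vrf_io.make_bytes()),
// with DefaultHasher as a placeholder for a real 32-byte cryptographic hash.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn fold_randomness(pre_next_randomness: [u8; 32], vrf_bytes: &[u8; 16]) -> [u8; 32] {
    let mut hasher = DefaultHasher::new();
    pre_next_randomness.hash(&mut hasher);
    vrf_bytes.hash(&mut hasher);
    // A real implementation would use a 32-byte cryptographic hash; here the
    // 8-byte std hash is tiled just to keep the sketch dependency-free.
    let h = hasher.finish().to_le_bytes();
    let mut out = [0u8; 32];
    for chunk in out.chunks_mut(8) {
        chunk.copy_from_slice(&h);
    }
    out
}
```

Each block author would fold their VRF bytes into the running value and publish the updated 32 bytes in the header, so computing the next epoch's randomness no longer requires iterating over many past blocks.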

I've made this comment here because we might reject that change in BABE just for simplicity, but consider it more fully here.

Anyway, the question is: how expensive is it to iterate over old block headers? And how expensive is an extra 32 bytes in a block header?

@gnunicorn
Contributor

Closing because of inactivity.

@gnunicorn gnunicorn closed this Sep 9, 2020
@burdges

burdges commented Sep 9, 2020

We'll adopt this eventually, but it just got stuck behind higher-priority parachains work.

We should update this to #7053 too whenever it gets done.
