LogUp-based Virtual Bus #1307

Merged: 38 commits, Jun 25, 2024
Commits
44a031e
chore: fix no-std errors
Al-Kindi-0 Mar 22, 2024
fa5bd2e
merge next
Al-Kindi-0 Mar 28, 2024
48116ad
Merge branch 'next' of https://github.com/0xPolygonMiden/miden-vm int…
Al-Kindi-0 Apr 29, 2024
b2b2de5
Merge branch 'next' of https://github.com/0xPolygonMiden/miden-vm int…
Al-Kindi-0 May 1, 2024
e2ce938
Merge branch 'next' of https://github.com/0xPolygonMiden/miden-vm int…
Al-Kindi-0 May 10, 2024
0bbb515
Merge branch 'next' of https://github.com/0xPolygonMiden/miden-vm int…
Al-Kindi-0 May 24, 2024
7078870
feat: migrate to new Winterfell release
Al-Kindi-0 May 10, 2024
df9b110
chore: fix no-std errors
Al-Kindi-0 Mar 22, 2024
699cb11
feat: sum-check protocol
Al-Kindi-0 Mar 28, 2024
3b2acf5
fix: no-std
Al-Kindi-0 Mar 28, 2024
431b937
chore: fmt
Al-Kindi-0 Mar 28, 2024
2686d65
fix: remove challenger marker
Al-Kindi-0 Mar 28, 2024
de541c5
chore: add evaluation point to final eval claim output of prover
Al-Kindi-0 Mar 28, 2024
5110915
chore: fix typos and comments
Al-Kindi-0 Apr 5, 2024
7537be3
chore: nits and renaming
Al-Kindi-0 Apr 5, 2024
87e3c49
refactor: remove Option for FinalOpeningClaim
Al-Kindi-0 Apr 5, 2024
720ac33
chore: namings comments and misc.
Al-Kindi-0 Apr 10, 2024
cea681a
wip: move to coefficient form
Al-Kindi-0 Apr 17, 2024
0fed9cd
feat: move sum-check round polys to coefficient form representation
Al-Kindi-0 Apr 17, 2024
cb503d1
fix: add file
Al-Kindi-0 Apr 17, 2024
5b734d1
wip: GKR protocol for virtual bus
Al-Kindi-0 Apr 4, 2024
fa97e9e
chore: add accessors and re-org
Al-Kindi-0 Apr 5, 2024
63243a8
chore: fix merge conflicts
Al-Kindi-0 Apr 5, 2024
a275df4
chore: re-shuffle
Al-Kindi-0 Apr 5, 2024
d6cd197
chore: add vb errors and failing test
Al-Kindi-0 Apr 7, 2024
016bc95
fix: update trait def after rebase
Al-Kindi-0 Apr 18, 2024
f46ac0c
fix: use last_mut
Al-Kindi-0 Apr 22, 2024
31609c5
fix: address some comments
Al-Kindi-0 Apr 25, 2024
7567a10
Change suggestions to GKR implementation (#1323)
plafer May 3, 2024
37af90b
chore: rebase
Al-Kindi-0 May 21, 2024
456ebec
chore: pacify warning
Al-Kindi-0 May 21, 2024
173e26a
chore: change sumcheck_round signature
Al-Kindi-0 May 24, 2024
4206910
chore: remove to_owned
Al-Kindi-0 May 24, 2024
09deed7
GKR: Refactor circuit evaluation (#1345)
plafer Jun 9, 2024
8c1842b
Merge branch 'next' into al-gkr-circuit
plafer Jun 16, 2024
8b7b5c6
remove unused alloc
plafer Jun 16, 2024
0dca8f2
docs: minor comment improvements and typo fixes
bobbinth Jun 23, 2024
ddd7b50
refactor: move CompositionPoly from circuit to sumcheck
bobbinth Jun 23, 2024
12 changes: 8 additions & 4 deletions air/src/trace/main_trace.rs
@@ -19,9 +19,6 @@ use super::{
use core::ops::{Deref, Range};
use vm_core::{utils::range, Felt, Word, ONE, ZERO};

#[cfg(any(test, feature = "internals"))]
use alloc::vec::Vec;

// CONSTANTS
// ================================================================================================

@@ -43,6 +40,13 @@ impl Deref for MainTrace {
}
}

#[cfg(any(test, feature = "internals"))]
impl core::ops::DerefMut for MainTrace {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.columns
    }
}

impl MainTrace {
    pub fn new(main_trace: ColMatrix<Felt>) -> Self {
        Self {
@@ -55,7 +59,7 @@ impl MainTrace {
}

    #[cfg(any(test, feature = "internals"))]
    pub fn get_column_range(&self, range: Range<usize>) -> Vec<Vec<Felt>> {
    pub fn get_column_range(&self, range: Range<usize>) -> alloc::vec::Vec<alloc::vec::Vec<Felt>> {
        range.fold(vec![], |mut acc, col_idx| {
            acc.push(self.get_column(col_idx).to_vec());
            acc
4 changes: 3 additions & 1 deletion processor/Cargo.toml
@@ -26,8 +26,10 @@ std = ["vm-core/std", "winter-prover/std"]
tracing = { version = "0.1", default-features = false, features = [
"attributes",
] }
vm-core = { package = "miden-core", path = "../core", version = "0.9", default-features = false }
miden-air = { package = "miden-air", path = "../air", version = "0.9", default-features = false }
static_assertions = "1.1.0"
thiserror = { version = "1.0", default-features = false }
Contributor

Are we using this somewhere? I think the standard thiserror requires stdlib, and we should probably use the one forked by @bitwalker, as we do in the assembler. Though we should probably move it into this org - probably as miden-thiserror.

Contributor

Yeah, we should use the fork anywhere in this workspace where we depend on thiserror, at least temporarily - the error_in_core feature is stabilized in 1.80, and I expect that mainline thiserror will support no-std shortly thereafter. With that in mind, it probably is not worth moving the thiserror fork to our org for such a short time. That said, I'll double check what the plan is for thiserror, and if it looks like it'll be a month or two, then moving it makes more sense (I don't actually care whether we move it now or not, but didn't want to overcomplicate things for what amounts to a temporary workaround).

We probably should move the miette fork to our org though, since that will likely stick around for a while longer, even once the thiserror fork goes away. I can try to find some time to do that.

Contributor

the error_in_core feature is stabilized in 1.80

Ah - I didn't realize it is already slated for 1.80. In this case, no need to create miden-thiserror - we can just rely on your fork until the canonical thiserror is updated.

vm-core = { package = "miden-core", path = "../core", version = "0.9", default-features = false }
winter-prover = { package = "winter-prover", version = "0.9", default-features = false }

[dev-dependencies]
5 changes: 4 additions & 1 deletion processor/src/lib.rs
@@ -57,7 +57,10 @@ use chiplets::Chiplets;

mod trace;
use trace::TraceFragment;
pub use trace::{ChipletsLengths, ExecutionTrace, TraceLenSummary, NUM_RAND_ROWS};
pub use trace::{
    prove_virtual_bus, verify_virtual_bus, ChipletsLengths, ExecutionTrace, TraceLenSummary,
    NUM_RAND_ROWS,
};

mod errors;
pub use errors::{ExecutionError, Ext2InttError};
3 changes: 3 additions & 0 deletions processor/src/trace/mod.rs
@@ -18,6 +18,9 @@ use winter_prover::{crypto::RandomCoin, EvaluationFrame, Trace, TraceInfo};
mod utils;
pub use utils::{AuxColumnBuilder, ChipletsLengths, TraceFragment, TraceLenSummary};

mod virtual_bus;
pub use virtual_bus::{prove as prove_virtual_bus, verify as verify_virtual_bus};

#[cfg(test)]
mod tests;
#[cfg(test)]
24 changes: 24 additions & 0 deletions processor/src/trace/virtual_bus/circuit/error.rs
@@ -0,0 +1,24 @@
use crate::trace::virtual_bus::sum_check::SumCheckProverError;
use crate::trace::virtual_bus::sum_check::SumCheckVerifierError;

#[derive(Debug, thiserror::Error)]
pub enum ProverError {
    #[error("failed to generate multi-linear from the given evaluations")]
    FailedToGenerateML,
    #[error("failed to generate the sum-check proof")]
    FailedToProveSumCheck(#[from] SumCheckProverError),
    #[error("failed to generate the random challenge")]
    FailedToGenerateChallenge,
}

#[derive(Debug, thiserror::Error)]
pub enum VerifierError {
    #[error("one of the claimed circuit denominators is zero")]
    ZeroOutputDenominator,
    #[error("the output of the fraction circuit is not equal to the expected value")]
    MismatchingCircuitOutput,
    #[error("failed to generate the random challenge")]
    FailedToGenerateChallenge,
    #[error("failed to verify the sum-check proof")]
    FailedToVerifySumCheck(#[from] SumCheckVerifierError),
}
290 changes: 290 additions & 0 deletions processor/src/trace/virtual_bus/circuit/mod.rs
@@ -0,0 +1,290 @@
use core::ops::Add;

use crate::trace::virtual_bus::multilinear::EqFunction;
use crate::trace::virtual_bus::sum_check::{CompositionPolynomial, RoundProof};
use alloc::vec::Vec;
use miden_air::trace::chiplets::{MEMORY_D0_COL_IDX, MEMORY_D1_COL_IDX};
use miden_air::trace::decoder::{DECODER_OP_BITS_OFFSET, DECODER_USER_OP_HELPERS_OFFSET};
use miden_air::trace::range::{M_COL_IDX, V_COL_IDX};
use miden_air::trace::{CHIPLETS_OFFSET, TRACE_WIDTH};
use prover::CircuitLayerPolys;
use static_assertions::const_assert;
use vm_core::{Felt, FieldElement};

mod error;
mod prover;
pub use prover::prove;

mod verifier;
pub use verifier::verify;

use super::multilinear::MultiLinearPoly;
use super::sum_check::{FinalOpeningClaim, Proof as SumCheckProof};

/// Defines the number of wires in the input layer that are generated from a single main trace row.
const NUM_WIRES_PER_TRACE_ROW: usize = 8;
const_assert!(NUM_WIRES_PER_TRACE_ROW.is_power_of_two());

// CIRCUIT WIRE
// ================================================================================================

/// Represents a fraction `numerator / denominator` as a pair `(numerator, denominator)`. This is
/// the type for the gates' inputs in [`prover::EvaluatedCircuit`].
///
/// Hence, addition is defined in the natural way fractions are added together: `a/b + c/d = (ad +
/// bc) / bd`.
#[derive(Debug, Clone, Copy)]
pub struct CircuitWire<E: FieldElement> {
numerator: E,
denominator: E,
}

impl<E> CircuitWire<E>
where
E: FieldElement,
{
/// Creates new projective coordinates from a numerator and a denominator.
pub fn new(numerator: E, denominator: E) -> Self {
assert_ne!(denominator, E::ZERO);

Self {
numerator,
denominator,
}
}
}

impl<E> Add for CircuitWire<E>
where
E: FieldElement,
{
type Output = Self;

fn add(self, other: Self) -> Self {
let numerator = self.numerator * other.denominator + other.numerator * self.denominator;
let denominator = self.denominator * other.denominator;

Self::new(numerator, denominator)
}
}

/// Converts a main trace row (or more generally "query") to numerators and denominators of the
/// input layer.
fn evaluate_fractions_at_main_trace_query<E>(
query: &[E],
log_up_randomness: &[E],
) -> [[E; NUM_WIRES_PER_TRACE_ROW]; 2]
where
E: FieldElement,
{
// numerators
let multiplicity = query[M_COL_IDX];
let f_m = {
let mem_selec0 = query[CHIPLETS_OFFSET];
let mem_selec1 = query[CHIPLETS_OFFSET + 1];
let mem_selec2 = query[CHIPLETS_OFFSET + 2];
mem_selec0 * mem_selec1 * (E::ONE - mem_selec2)
};

let f_rc = {
let op_bit_4 = query[DECODER_OP_BITS_OFFSET + 4];
let op_bit_5 = query[DECODER_OP_BITS_OFFSET + 5];
let op_bit_6 = query[DECODER_OP_BITS_OFFSET + 6];

(E::ONE - op_bit_4) * (E::ONE - op_bit_5) * op_bit_6
};

// denominators
let alphas = log_up_randomness;

let table_denom = alphas[0] - query[V_COL_IDX];
let memory_denom_0 = -(alphas[0] - query[MEMORY_D0_COL_IDX]);
let memory_denom_1 = -(alphas[0] - query[MEMORY_D1_COL_IDX]);
let stack_value_denom_0 = -(alphas[0] - query[DECODER_USER_OP_HELPERS_OFFSET]);
let stack_value_denom_1 = -(alphas[0] - query[DECODER_USER_OP_HELPERS_OFFSET + 1]);
let stack_value_denom_2 = -(alphas[0] - query[DECODER_USER_OP_HELPERS_OFFSET + 2]);
let stack_value_denom_3 = -(alphas[0] - query[DECODER_USER_OP_HELPERS_OFFSET + 3]);

[
[multiplicity, f_m, f_m, f_rc, f_rc, f_rc, f_rc, E::ZERO],
[
table_denom,
memory_denom_0,
memory_denom_1,
stack_value_denom_0,
stack_value_denom_1,
stack_value_denom_2,
stack_value_denom_3,
E::ONE,
],
]
}
Comment on lines +71 to +121
Contributor

Is this basically the only place where we have logic specific to Miden VM? Not for this PR, but I'm wondering if everything else is generic logic that could be a part of Winterfell rather than Miden VM. For example, we could introduce a method on the Air trait that would basically do the job of this function and look something like this:

fn build_gkr_input_layer<F, E>(&self, row: &[F], logup_rands: &[E]) -> [Vec<E>; 2]
where
  F: FieldElement<BaseField = Self::BaseField>,
  E: FieldElement<BaseField = Self::BaseField> + ExtensionOf<F>,

And then both GKR prover and verifier would be able to use this to build and verify the proof respectively.
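As a purely illustrative sketch (hypothetical names, not code from this PR or Winterfell), a generic GKR prover could then assemble the input layer by calling such a method on every main-trace row:

```rust
use alloc::vec::Vec;
use winter_air::Air;
use winter_math::{ExtensionOf, FieldElement};

/// Hypothetical: builds the numerators/denominators of the GKR input layer by delegating
/// the VM-specific logic to the proposed `Air::build_gkr_input_layer` method.
fn build_input_layer<A, F, E>(air: &A, rows: &[Vec<F>], logup_rands: &[E]) -> Vec<[Vec<E>; 2]>
where
    A: Air,
    F: FieldElement<BaseField = A::BaseField>,
    E: FieldElement<BaseField = A::BaseField> + ExtensionOf<F>,
{
    rows.iter()
        // each row contributes one array of numerators and one array of denominators
        .map(|row| air.build_gkr_input_layer(row, logup_rands))
        .collect()
}
```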

A couple of other questions:

  • What is log_up_randomness here? Is it a set of random values generated from the main trace commitment root? Are these the "lambdas" referenced in this issue or is it something else?
    • I'm assuming we are using only the first element from log_up_randomness because of the nature of the constraints. If we had more complicated tables (e.g., where multiple columns needed to be merged into a single value per row), we'd need to use more random elements - right?
  • Where would we specify "boundary" constraints for the buses? Specifically, for sub-busses which do not start and/or end with 0, what would be the way to specify initial/final values?

Collaborator Author

Yes, that makes sense. This will, however, require that Winterfell is able to handle the logic for building the s-column, similar to how it does for the l-column (i.e., the Lagrange column).

To this end, we would need to introduce the concept of an oracle. This is basically a wrapper around CompositionPolynomial that records the indices of the columns appearing in the algebraic expression encoded in the CompositionPolynomial, together with additional information needed to verify the final evaluation claim: e.g., whether a column is periodic (or, more generally, transparent) and thus can be computed by the verifier un-assisted, and hence does not need to be included in the s-column, or whether a column is a shift, which matters when building the s-column.
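To make this concrete, here is a rough, non-normative sketch of what such an oracle could record (all names are hypothetical and not part of this PR):

```rust
use alloc::{boxed::Box, vec::Vec};
use vm_core::FieldElement;

use crate::trace::virtual_bus::sum_check::CompositionPolynomial;

/// Hypothetical description of one column referenced by the composition polynomial;
/// the verifier needs this information to check the final evaluation claim.
enum ColumnKind<E: FieldElement> {
    /// A committed trace column (by index); it has to be folded into the s-column.
    Committed { col_idx: usize },
    /// The shift (next-row value) of a committed column; shifts matter when building
    /// the s-column.
    Shifted { col_idx: usize },
    /// A periodic (more generally, transparent) column that the verifier can evaluate
    /// un-assisted, so it does not need to be included in the s-column.
    Transparent(Box<dyn Fn(&[E]) -> E>),
}

/// Hypothetical oracle: a composition polynomial together with a description of the
/// columns appearing in its algebraic expression.
struct Oracle<E: FieldElement> {
    composition: Box<dyn CompositionPolynomial<E>>,
    columns: Vec<ColumnKind<E>>,
}
```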

  • What is log_up_randomness here? Is it a set of random values generated from the main trace commitment root?

Yes, these are the random values generated from the main trace commitment, usually called the $\alpha$'s, and indeed if we had more columns to merge we would use more than one.

Where would we specify "boundary" constraints for the buses? Specifically, for sub-busses which do not start and/or end with 0, what would be the way to specify initial/final values?

The verifier will compute all of the boundary values in order to get a single claim value. The verifier will then check that the output of the LogUp circuit equals that claim. More precisely, this would be done here:

pub fn verify<
    E: FieldElement<BaseField = Felt>,
    C: RandomCoin<Hasher = H, BaseField = Felt>,
    H: ElementHasher<BaseField = Felt>,
>(
    claim: E,
    proof: GkrCircuitProof<E>,
    log_up_randomness: Vec<E>,
    transcript: &mut C,
) -> Result<FinalOpeningClaim<E>, VerifierError> {
    ...
    // check that the output matches the expected `claim`
    if (p0 * q1 + p1 * q0) / (q0 * q1) != claim {
        return Err(VerifierError::MismatchingCircuitOutput);
    }
    ...
}

At the moment, claim is set to zero but if we want to go with the proposal of delegating everything to Winterfell, then we would need to add a method to the GKR verifier so that it can compute claim from the public inputs.
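For illustration, such a method could look roughly like this (the trait, associated type, and names are hypothetical, not part of this PR):

```rust
use vm_core::FieldElement;

/// Hypothetical hook: lets the GKR verifier derive the expected output of the LogUp
/// circuit (the `claim` checked above) from the public inputs instead of hard-coding
/// it to zero.
trait GkrClaimFromPublicInputs<E: FieldElement> {
    type PubInputs;

    /// Combines the boundary values of all sub-buses into the single claimed value
    /// that the output of the fraction circuit must equal.
    fn compute_gkr_claim(&self, pub_inputs: &Self::PubInputs, log_up_randomness: &[E]) -> E;
}
```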


/// Computes the wires added to the input layer that come from a given main trace row (or more
/// generally, "query").
fn compute_input_layer_wires_at_main_trace_query<E>(
query: &[E],
log_up_randomness: &[E],
) -> [CircuitWire<E>; NUM_WIRES_PER_TRACE_ROW]
where
E: FieldElement,
{
let [numerators, denominators] =
evaluate_fractions_at_main_trace_query(query, log_up_randomness);
let input_gates_values: Vec<CircuitWire<E>> = numerators
.into_iter()
.zip(denominators)
.map(|(numerator, denominator)| CircuitWire::new(numerator, denominator))
.collect();
input_gates_values.try_into().unwrap()
}

/// A GKR proof for the correct evaluation of the sum of fractions circuit.
#[derive(Debug)]
pub struct GkrCircuitProof<E: FieldElement> {
circuit_outputs: CircuitLayerPolys<E>,
before_final_layer_proofs: BeforeFinalLayerProof<E>,
final_layer_proof: FinalLayerProof<E>,
}
Comment on lines +142 to +148
Contributor

Question: how big can we expect the proof to be (e.g., in KB) and what are the primary drivers? I'm guessing the primary driver is the complexity of the bus constraints - but does trace length also make a big difference?

Collaborator Author

The proof is composed of two parts:

  1. The round polynomials: the number of these depends on the trace length as well as the complexity of the bus, while their degree depends on the max selector degree.
  2. The opening points: the number of these depends on the trace length and the complexity of the bus.

To give a concrete formula, suppose that the trace length is $2^{\nu}$ and the number of fractions defining the bus is $2^{\mu + 1}$, then:

  1. The number of field elements defining the round polynomials is given by $$\frac{3\cdot(\mu + \nu - 1)\cdot(\mu + \nu)}{2} + 3\cdot\mu + d\cdot\nu,$$ where the $3$ is the degree of the round polynomials in all sum-checks but the final one, and $d$ is the degree of the round polynomial in the final sum-check (which, as stated above, depends on the degree of the selectors).
  2. The number of opening points is given by $$4\cdot (\mu + \nu - 1) + k,$$ where $k$ is the number of trace columns, as well as their shifts, appearing in the bus.

Note that all elements in the above are extension field elements.

To give some concrete numbers, suppose that $\nu = 20$, $\mu = 5$ and $k = 40$ (max selector degree is 3), then:

  1. The proof size due to the round polynomials is 1015 extension field elements, which, for a quadratic extension, translates to about 16 KB.
  2. The proof size due to the opening points is 136 extension field elements, assuming we open 40 columns and column-shifts. For a quadratic extension, this translates to about 2 KB.
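As a sanity check on these numbers, here is a small illustrative snippet (the helper names and the 16-bytes-per-element figure are illustrative assumptions; the final-round degree is taken as $d = 5$ to match the 1015 figure):

```rust
/// Extension field elements in the sum-check round polynomials:
/// 3*(mu + nu - 1)*(mu + nu)/2 + 3*mu + d*nu.
fn round_poly_elements(mu: usize, nu: usize, d: usize) -> usize {
    3 * (mu + nu - 1) * (mu + nu) / 2 + 3 * mu + d * nu
}

/// Extension field elements in the opening points:
/// 4*(mu + nu - 1) + k, where k counts the opened columns and their shifts.
fn opening_elements(mu: usize, nu: usize, k: usize) -> usize {
    4 * (mu + nu - 1) + k
}

fn main() {
    // Example from the discussion: trace length 2^20, 2^(5+1) fractions, 40 opened columns.
    let (mu, nu, d, k) = (5, 20, 5, 40);
    // Quadratic extension of a 64-bit base field => 16 bytes per element (assumption).
    let bytes_per_element = 16.0_f64;

    let round = round_poly_elements(mu, nu, d); // = 1015
    let openings = opening_elements(mu, nu, k); // = 136

    println!("round polynomials: {round} elements (~{:.1} KB)", round as f64 * bytes_per_element / 1024.0);
    println!("opening points:    {openings} elements (~{:.1} KB)", openings as f64 * bytes_per_element / 1024.0);
}
```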


impl<E: FieldElement> GkrCircuitProof<E> {
pub fn get_final_opening_claim(&self) -> FinalOpeningClaim<E> {
self.final_layer_proof.after_merge_proof.openings_claim.clone()
}
}

/// A set of sum-check proofs for all GKR layers but for the input circuit layer.
#[derive(Debug)]
pub struct BeforeFinalLayerProof<E: FieldElement> {
pub proof: Vec<SumCheckProof<E>>,
}

/// A proof for the input circuit layer i.e., the final layer in the GKR protocol.
#[derive(Debug)]
pub struct FinalLayerProof<E: FieldElement> {
before_merge_proof: Vec<RoundProof<E>>,
after_merge_proof: SumCheckProof<E>,
}

/// Represents a claim to be proven by a subsequent call to the sum-check protocol.
#[derive(Debug)]
pub struct GkrClaim<E: FieldElement> {
pub evaluation_point: Vec<E>,
pub claimed_evaluation: (E, E),
}

/// A composition polynomial used in the GKR protocol for all of its sum-checks except the final
/// one.
#[derive(Clone)]
pub struct GkrComposition<E>
where
E: FieldElement<BaseField = Felt>,
{
pub combining_randomness: E,
}

impl<E> GkrComposition<E>
where
E: FieldElement<BaseField = Felt>,
{
pub fn new(combining_randomness: E) -> Self {
Self {
combining_randomness,
}
}
}

impl<E> CompositionPolynomial<E> for GkrComposition<E>
where
E: FieldElement<BaseField = Felt>,
{
fn max_degree(&self) -> u32 {
3
}

fn evaluate(&self, query: &[E]) -> E {
let eval_left_numerator = query[0];
let eval_right_numerator = query[1];
let eval_left_denominator = query[2];
let eval_right_denominator = query[3];
let eq_eval = query[4];
eq_eval
* ((eval_left_numerator * eval_right_denominator
+ eval_right_numerator * eval_left_denominator)
+ eval_left_denominator * eval_right_denominator * self.combining_randomness)
}
}

/// A composition polynomial used in the GKR protocol for its final sum-check.
#[derive(Clone)]
pub struct GkrCompositionMerge<E>
where
E: FieldElement<BaseField = Felt>,
{
pub sum_check_combining_randomness: E,
pub tensored_merge_randomness: Vec<E>,
pub log_up_randomness: Vec<E>,
}

impl<E> GkrCompositionMerge<E>
where
E: FieldElement<BaseField = Felt>,
{
pub fn new(
combining_randomness: E,
merge_randomness: Vec<E>,
log_up_randomness: Vec<E>,
) -> Self {
let tensored_merge_randomness =
EqFunction::ml_at(merge_randomness.clone()).evaluations().to_vec();

Self {
sum_check_combining_randomness: combining_randomness,
tensored_merge_randomness,
log_up_randomness,
}
}
}

impl<E> CompositionPolynomial<E> for GkrCompositionMerge<E>
where
E: FieldElement<BaseField = Felt>,
{
fn max_degree(&self) -> u32 {
// Computed as:
// 1 + max(left_numerator_degree + right_denom_degree, right_numerator_degree +
// left_denom_degree)
5
}

fn evaluate(&self, query: &[E]) -> E {
let [numerators, denominators] =
evaluate_fractions_at_main_trace_query(query, &self.log_up_randomness);

let numerators = MultiLinearPoly::from_evaluations(numerators.to_vec()).unwrap();
let denominators = MultiLinearPoly::from_evaluations(denominators.to_vec()).unwrap();

let (left_numerators, right_numerators) = numerators.project_least_significant_variable();
let (left_denominators, right_denominators) =
denominators.project_least_significant_variable();

let eval_left_numerators =
left_numerators.evaluate_with_lagrange_kernel(&self.tensored_merge_randomness);
let eval_right_numerators =
right_numerators.evaluate_with_lagrange_kernel(&self.tensored_merge_randomness);

let eval_left_denominators =
left_denominators.evaluate_with_lagrange_kernel(&self.tensored_merge_randomness);
let eval_right_denominators =
right_denominators.evaluate_with_lagrange_kernel(&self.tensored_merge_randomness);

let eq_eval = query[TRACE_WIDTH];

eq_eval
* ((eval_left_numerators * eval_right_denominators
+ eval_right_numerators * eval_left_denominators)
+ eval_left_denominators
* eval_right_denominators
* self.sum_check_combining_randomness)
}
}