The instantiation of the fixed point vector VDAF with 64-bit fixed point numbers is of limited use, because it can only handle one-, two-, or three-dimensional vectors. Constructing it with four entries fails:
failed to construct Prio3FixedPoint64BitBoundedL2VecSum VDAF
Caused by:
0: flp error: value error: number of entries (4) not compatible with field size
1: value error: number of entries (4) not compatible with field size
This is because each vector coordinate is encoded in 63 bits, so each squared coordinate takes up to 126 bits. Adding four squared coordinates together brings the squared L2-norm to $\approx 2^{128}$, which exceeds the prime modulus of `Field128`.
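To make the arithmetic concrete, here is a small, self-contained Rust sketch (not code from the crate) that checks, for each dimension, whether the largest possible squared L2-norm stays below the `Field128` modulus. The modulus value used is the `Field128` prime from the VDAF spec, $p = 2^{66} \cdot 4611686018427387897 + 1$; treat it as an assumption if the crate's constant differs.

```rust
fn main() {
    // Field128 prime modulus from the VDAF spec (assumed here):
    // p = 2^66 * 4611686018427387897 + 1, slightly below 2^128.
    const FIELD128_MODULUS: u128 = (1u128 << 66) * 4_611_686_018_427_387_897 + 1;

    // With 64-bit fixed point, each coordinate is encoded in 63 bits,
    // so each squared coordinate occupies at most 126 bits.
    const SQUARED_COORD_MAX: u128 = (1u128 << 126) - 1;

    for entries in 1u32..=4 {
        // Largest possible squared L2-norm for this dimension;
        // checked_mul guards against u128 overflow.
        let max_norm = (entries as u128).checked_mul(SQUARED_COORD_MAX);
        let fits = matches!(max_norm, Some(n) if n < FIELD128_MODULUS);
        println!("{entries} entries: fits in Field128 = {fits}");
    }
}
```

Running this prints `fits = true` for one, two, and three entries and `false` for four, matching the error above.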
Given that the intended use case of this VDAF is federated learning, where low dimensionality and 64-bit fixed point precision are uncommon, I think we should remove this VDAF, leaving just the 16-bit and 32-bit versions. If need be, we can always add another specialization at, say, 48 bits down the line.
cc @ooovi @MxmUrw