
Remove Prio3FixedPoint64BitBoundedL2VecSum #1658

Closed
divergentdave opened this issue Jul 31, 2023 · 1 comment · Fixed by #2122

Comments

@divergentdave (Collaborator)

The instantiation of the fixed-point vector VDAF with 64-bit fixed-point numbers is of limited use, because it can only handle one-, two-, or three-dimensional vectors.

```
failed to construct Prio3FixedPoint64BitBoundedL2VecSum VDAF

Caused by:
    0: flp error: value error: number of entries (4) not compatible with field size
    1: value error: number of entries (4) not compatible with field size
```

This is because each vector coordinate is encoded in 63 bits, so each squared coordinate takes up to 126 bits. Adding four squared coordinates together brings the squared L2-norm to $\approx 2^{128}$, which exceeds the prime modulus of Field128 (itself slightly below $2^{128}$).
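To make the bound concrete, here is a minimal, self-contained sketch of the bit-budget check (this is not the library's actual validation code, and the Field128 modulus is hard-coded here as an assumption rather than read from the crate):

```rust
// Sketch of the bit-budget argument for Prio3FixedPoint64BitBoundedL2VecSum.
fn main() {
    // Each coordinate's magnitude fits in 63 bits, so each squared
    // coordinate is strictly less than 2^126.
    let square_bound: u128 = 1 << 126;
    // Assumed value of Field128's prime modulus, p = 2^128 - 7*2^66 + 1.
    let modulus: u128 = 340282366920938462946865773367900766209;

    for entries in 1u128..=4 {
        // Upper bound on the squared L2-norm: entries * 2^126. checked_mul
        // returns None exactly when that bound no longer fits in 128 bits.
        let fits = entries
            .checked_mul(square_bound)
            .map_or(false, |bound| bound < modulus);
        println!("{entries} entries: squared norm fits in Field128: {fits}");
        // Prints true for 1, 2, and 3 entries, and false for 4.
    }
}
```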

Given that the intended use case of this VDAF is federated learning, where low dimensionality and 64-bit fixed-point precision are both uncommon, I think we should remove this VDAF, leaving just the 16-bit and 32-bit versions. If need be, we can always add another specialization at, say, 48 bits down the line.
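By the same bit-budget argument, a $b$-bit version supports $n < 2^{128 - 2(b-1)}$ entries: for $b = 64$ that gives $n \le 3$, matching the error above, while a hypothetical 48-bit version would allow up to $2^{34} - 1$ entries before any other encoding limits apply.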

cc @ooovi @MxmUrw

@MxmUrw (Contributor) commented Aug 2, 2023

Yes, fair enough. I think we should do so after #1440, right?
