
Allow big protocol messages #495

Merged — merged 6 commits into master from peg/big-protocol-messages on Nov 13, 2023
Conversation

@ameba23 (Contributor) commented Nov 13, 2023

This addresses #491 by allowing protocol messages (signing/DKG/proactive refresh) which are bigger than the maximum message size for the noise protocol (64kb).

This works by splitting the incoming/outgoing messages into chunks of 64kb or less, encrypting or decrypting each chunk, and then concatenating the results into a single message.

The maximum message size is now the maximum size allowed by WebSockets, which will not be an issue for us (it's 2^64 bytes).
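The chunk-and-concatenate approach described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the real implementation encrypts or decrypts each chunk with the noise transport, while here each chunk is just copied through, and the 65,535-byte constant is the noise framework's frame limit (roughly the "64kb" mentioned above).

```rust
// Hypothetical sketch of the chunking idea (not the PR's actual code):
// split a payload into chunks of at most the noise message size limit,
// process each one, and concatenate the results. Real code would encrypt
// or decrypt each chunk; here we just copy the bytes through.
const MAX_NOISE_MESSAGE_SIZE: usize = 65_535; // noise's ~64kb frame limit

fn process_in_chunks(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(payload.len());
    for chunk in payload.chunks(MAX_NOISE_MESSAGE_SIZE) {
        // stand-in for encrypting/decrypting one chunk
        out.extend_from_slice(chunk);
    }
    out
}

fn main() {
    // ~170kb message, like the largest seen with test parameters
    let message = vec![0u8; 170_000];
    let result = process_in_chunks(&message);
    assert_eq!(result.len(), message.len());
    let chunks = (message.len() + MAX_NOISE_MESSAGE_SIZE - 1) / MAX_NOISE_MESSAGE_SIZE;
    println!("chunks: {}", chunks); // prints "chunks: 3"
}
```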


@HCastano (Collaborator) left a comment

Nice work, thanks!

while i < ciphertext.len() {
    let read_to = min(i + MAX_NOISE_MESSAGE_SIZE, ciphertext.len());
    let len = self.noise_transport.read_message(&ciphertext[i..read_to], &mut self.buf)?;
    full_message.extend_from_slice(&self.buf[..len]);
Collaborator:

This clones every element of the buffer that's passed in. Do you think there could be any (eventual) problems related to high memory usage from this? Or does it not matter given the message sizes involved?

I'm not familiar with what we're sending over the wire, so that's why I'm asking.

@ameba23 (Contributor, Author):


You are right. I wanted to avoid this and read directly into an array, but that means knowing the size of the array at compile time. We can calculate how big it needs to be, but Rust won't let us allocate a stack array at runtime.

So unless we can decide on a new maximum message size, I can't see how to avoid cloning here.

Currently, with test parameters, the biggest message I have seen was about 170kb, but with production params they'll be bigger.

Maybe it's worth making an issue and revisiting this when synedrion gets more stable and we can come up with a reasonable maximum message size.
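One possible direction for such a follow-up issue, sketched very loosely: although a stack array needs a compile-time size, a single heap `Vec` can be sized at runtime, and each decrypted chunk can be written into a disjoint slice of it, avoiding the intermediate-buffer copy. This is a hypothetical sketch, not a drop-in fix: `read_chunk` here stands in for noise's `read_message`, and it assumes a chunk's plaintext is never larger than its ciphertext.

```rust
// Hypothetical sketch of one way to avoid the extra copy: preallocate a
// single heap Vec (its size can be computed at runtime, unlike a stack
// array) and write each decrypted chunk into a disjoint slice of it.
const MAX_NOISE_MESSAGE_SIZE: usize = 65_535;

// Stand-in for noise's read_message: "decrypts" one chunk into `out`
// and returns the number of plaintext bytes written. Here it just
// copies the bytes through unchanged.
fn read_chunk(ciphertext: &[u8], out: &mut [u8]) -> usize {
    out[..ciphertext.len()].copy_from_slice(ciphertext);
    ciphertext.len()
}

fn main() {
    let ciphertext = vec![1u8; 170_000];
    // Worst case: plaintext is no bigger than ciphertext.
    let mut full_message = vec![0u8; ciphertext.len()];
    let mut written = 0;
    for chunk in ciphertext.chunks(MAX_NOISE_MESSAGE_SIZE) {
        // Each chunk lands in its own region of the preallocated Vec.
        written += read_chunk(chunk, &mut full_message[written..]);
    }
    full_message.truncate(written);
    println!("decrypted {} bytes", full_message.len()); // prints "decrypted 170000 bytes"
}
```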

Collaborator:

Yeah cool, let's open an issue then. As part of that, let's also make a note that we should have some sort of metrics in place to track message sizes.

@ameba23 ameba23 merged commit 4cce26f into master Nov 13, 2023
5 checks passed
@ameba23 ameba23 deleted the peg/big-protocol-messages branch November 13, 2023 19:49
ameba23 added a commit that referenced this pull request Nov 14, 2023
* master:
  Integration test for validator joining network (#493)
  Add `ENTROPY_NODE` environment variable (#497)
  Change `SocketAddr` type for `String` (#496)
  Allow big protocol messages (#495)
  Add test for `EncryptedConnection` (#494)
  Use bincode rather than JSON for protocol and subscribe messages (#492)
  Remark on possible need to address TSS server from the Docker host. (#490)
  no proactive refresh on private key visibility (#485)
  Auxiliary data for program evaluation (#475)
  Validate proactive refresh endpoint (#483)