
Allow Batch Publishing For Applications #602

Draft
wants to merge 3 commits into base: master

Conversation

@nisdas (Contributor) commented Mar 12, 2025

For a more detailed reasoning on the motivation for this:
https://ethresear.ch/t/improving-das-performance-with-gossipsub-batch-publishing/21713

This pull request adds a new publishing option that allows messages to be published as a batch instead of being queued and sent out individually. Why would this be desired? Applications can send out multiple messages at the same time. If a node has constrained upload bandwidth and these messages are large, the router would spend longer than desired sending out D copies of the first published message to all its mesh peers. This can add a non-trivial amount of latency to the propagation of the other messages to the rest of the network.

This option allows these messages and their copies to be shuffled randomly, queued in that random order, and sent out to their respective peers. This allows the first copy of any message in the batch to reach the rest of the network much faster. With first copies going out sooner, messages arrive earlier across the whole network.

  • Add a batch publishing option.
  • Add a batch message object to track all the desired message IDs and the RPC messages to be published to individual peers.
  • Allow publishing of the whole batch once it is complete.
  • Add tests verifying the feature works as expected.

@vyzo (Collaborator) commented Mar 12, 2025

I think the latency would actually increase, as the whole RPC message has to be received and decoded before any message is processed, no?

@nisdas (Contributor, Author) commented Mar 12, 2025

I think the latency would actually increase, as the whole RPC message has to be received and decoded before any message is processed, no?

Maybe I am missing something; why would that be the case? This PR just lets you withhold sending the RPC messages to your mesh/direct peers until all the message IDs in the batch have been processed by the gossip router. We then shuffle the RPCs and send them out so that each message in the batch is sent out 'fairly'.

@vyzo (Collaborator) commented Mar 12, 2025

Ah, you are not sending them as one RPC, so it should be OK.
