Add hidden configuration for sending batched messages sequentially #2543
Conversation
Integration tests will be added before this is ready to be merged.
    Ok(tx_sync_results)
}

async fn sequential_send_messages_as_batches(
I'm concerned about more code duplication here.
When contemplating a refactoring of batching to respect the actual tx size, there are already two functions to care about, and this adds a third one.
Could this be incorporated into the regular `send_messages_as_batches` under a dynamic flag?
The core logic has already been abstracted into helper functions such as `batch_messages`, `send_tx_with_account_sequence_retry`, and `wait_for_block_commits`. Functions like `sequential_send_messages_as_batches` therefore only act as higher-level functions that wire up the lower-level ones; the low-level functions already serve the deduplication purpose.
Using a dynamic flag can obscure the logic that differentiates `send_messages_as_batches` from `sequential_send_messages_as_batches`. With them being separate functions, it is much clearer how the behaviors differ from one another.
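To make that wiring concrete, here is a minimal sketch of what such a higher-level function could look like. This is not the code from this PR: the helper signatures are simplified assumptions, and `TxSyncResult::from_response` is a hypothetical constructor standing in for however the sync result is actually built.

```rust
// Sketch only. Assumes simplified signatures for `batch_messages`,
// `send_tx_with_account_sequence_retry`, and `wait_for_block_commits`;
// `TxSyncResult::from_response` is a hypothetical stand-in.
async fn sequential_send_messages_as_batches(
    config: &ChainConfig,
    messages: Vec<Any>,
) -> Result<Vec<TxSyncResult>, Error> {
    let batches = batch_messages(config, messages)?;
    let mut tx_sync_results = Vec::new();

    for batch in batches {
        // Broadcast one batch as a single transaction.
        let response = send_tx_with_account_sequence_retry(config, &batch).await?;
        tx_sync_results.push(TxSyncResult::from_response(response, batch));

        // Wait for that transaction to be committed before sending the next batch.
        if let Some(last) = tx_sync_results.last_mut() {
            wait_for_block_commits(config, std::slice::from_mut(last)).await?;
        }
    }

    Ok(tx_sync_results)
}
```

The parallel version would instead broadcast every batch first and only then wait for all of the results together.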
tx_sync_results.push(tx_sync_result);

wait_for_block_commits(
With sequential sending, we get the DeliverTX result before sending the next transaction. However, it is a bit unclear whether we should continue submitting the remaining batches to the chain if DeliverTX or SendTX returns an error.
Arguably, we should short-circuit and return on the first error. However, I am keeping the behavior the same as the parallel version, so that we do not observe unexpected differences in behavior when sequential batching is enabled.
The anti-pattern of converting errors into duplicate IBC error events also significantly complicates the design space here. So I'd rather leave the behavior untouched until chain errors are handled in a better way.
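For illustration only (the helper names below are hypothetical, not from this PR), the two options discussed here differ only in the error arm of the per-batch loop:

```rust
// Hypothetical helpers for illustration: `broadcast_batch` sends one batch and
// waits for it to commit; `error_events_for_batch` converts the error into the
// per-message error events mentioned above.
for batch in batches {
    match broadcast_batch(&batch).await {
        Ok(result) => tx_sync_results.push(result),
        Err(e) => {
            // Behavior kept in this PR (same as the parallel path): record the
            // failure as error events and keep submitting the remaining batches.
            tx_sync_results.push(error_events_for_batch(&batch, &e));

            // The short-circuit alternative would instead stop here:
            // return Err(e);
        }
    }
}
```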
Add hidden configuration for sending batched messages sequentially (informalsystems#2543)

* Add sequential version of send_batched_messages
* Add sequential_batch_tx chain config
* Manual test behavior of integration tests with sequential_batch_tx
* Add draft test for sequential batching
* Add set_mempool_version to enable priority mempool
* Add integration test for sequential batching
* Relax time assertion for parallel batch
* Add some documentation to send_messages functions

Co-authored-by: Romain Ruetschi <romain@informal.systems>
Partial fix for #2350
Description
This PR adds a hidden chain configuration `sequential_batch_tx`, which causes the relayer to submit batched messages serially, sending each transaction only after the previous one has been committed to the chain. The config has the default value `false`, which disables sequential sending. However, we can instruct relayer operators to set it to `true` in the rare case that batched transactions are failing because the validators of a chain have turned on the priority mempool.
Note that enabling sequential sending may degrade performance. Hence we do not expose this config in the public documentation, to prevent relayer operators from shooting themselves in the foot.
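For operators who do need it, the option would be set per chain in the Hermes `config.toml`. A sketch of such an entry is shown below; only `sequential_batch_tx` is the option added by this PR, and the surrounding fields are shown purely as illustrative context.

```toml
# Excerpt from a Hermes config.toml (illustrative; only sequential_batch_tx
# is the option introduced by this PR).
[[chains]]
id = 'example-chain-1'
rpc_addr = 'http://127.0.0.1:26657'
grpc_addr = 'http://127.0.0.1:9090'

# Hidden option: submit each batched transaction only after the previous one
# has been committed to the chain. Defaults to false.
sequential_batch_tx = true
```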