Currently, all benchmark stats are collected from consensus in `SequencerContext`; each node then reports to the orchestrator, which outputs the benchmarks after some averaging.
However, it makes more sense to compute overall stats (like latency and throughput) from the API by subscribing to streams such as `availability/stream/blocks/{}`.
This issue tracks that switch; it is also a follow-up to #1628.
Sub-issue tracker for each stat:
total_nodes
da_committee_size
transactions_per_round
transaction_size
rounds_start
rounds_end
avg_latency_in_sec
minimum_latency_in_sec
maximum_latency_in_sec
avg_throughput
total_time_elapsed_in_sec
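The latency and throughput stats above could then be computed client-side from the block stream rather than inside consensus. A minimal sketch of that calculation (the sampling mechanism and all names here are assumptions for illustration, not the actual sequencer API — a real client would collect `(block_size, latency)` samples while subscribed to `availability/stream/blocks/{}`):

```rust
// Hypothetical stat aggregation over samples a client could gather from the
// availability stream. Each sample is (block payload size in bytes, observed
// latency in seconds). Struct/field names mirror the stats listed above.
struct BenchStats {
    avg_latency_in_sec: f64,
    minimum_latency_in_sec: f64,
    maximum_latency_in_sec: f64,
    avg_throughput: f64, // bytes per second over the whole run
    total_time_elapsed_in_sec: f64,
}

fn compute_stats(samples: &[(u64, f64)], total_time_elapsed_in_sec: f64) -> Option<BenchStats> {
    if samples.is_empty() || total_time_elapsed_in_sec <= 0.0 {
        return None;
    }
    let total_bytes: u64 = samples.iter().map(|(size, _)| *size).sum();
    let latency_sum: f64 = samples.iter().map(|(_, l)| *l).sum();
    // Fold with f64::min/f64::max since f64 is not Ord.
    let min = samples.iter().map(|(_, l)| *l).fold(f64::INFINITY, f64::min);
    let max = samples.iter().map(|(_, l)| *l).fold(f64::NEG_INFINITY, f64::max);
    Some(BenchStats {
        avg_latency_in_sec: latency_sum / samples.len() as f64,
        minimum_latency_in_sec: min,
        maximum_latency_in_sec: max,
        avg_throughput: total_bytes as f64 / total_time_elapsed_in_sec,
        total_time_elapsed_in_sec,
    })
}

fn main() {
    // Two sample blocks: 1000 bytes at 1s latency, 2000 bytes at 3s latency,
    // over a 10-second run.
    let samples = [(1000u64, 1.0), (2000, 3.0)];
    let stats = compute_stats(&samples, 10.0).unwrap();
    println!(
        "avg={} min={} max={} throughput={}",
        stats.avg_latency_in_sec,
        stats.minimum_latency_in_sec,
        stats.maximum_latency_in_sec,
        stats.avg_throughput
    );
}
```

With this shape, the orchestrator's averaging step becomes unnecessary: any observer of the stream can reproduce the numbers independently.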
Closes #1628, #1695
### This PR:
- makes benchmarks runnable with the sequencer locally
- saves the metrics of each run to a file
### This PR does not:
- parameterize the benchmark parameters for the sequencer; in particular,
`start_round` and `end_round` are hard-coded for now. This will be
designed later.
### Key places to review:
### How to test this PR:
Create `results.csv` under `scripts/benchmarks_results`, then run `just demo-native-benchmark` to test it.