
Running benchmarks with sequencer locally #1629

Merged
merged 31 commits into main from sishan/benchmark on Jul 17, 2024

Conversation

@dailinsubjam (Contributor) commented on Jun 20, 2024:

Closes #1628 #1695

This PR:

  • makes benchmarks runnable with the sequencer locally
  • saves the metrics of each run into a file

This PR does not:

  • parameterize the benchmark parameters for the sequencer (notably start_round and end_round); they are hard-coded for now, and the parameterization will be designed later.

Key places to review:

How to test this PR:

Create results.csv under scripts/benchmarks_results, then run just demo-native-benchmark to test it.

@dailinsubjam dailinsubjam marked this pull request as draft June 20, 2024 09:50
@dailinsubjam dailinsubjam changed the title [DRAFT] Running benchmarks with sequencer locally Jun 20, 2024
@dailinsubjam dailinsubjam marked this pull request as ready for review June 27, 2024 06:24
@tbro (Contributor) left a comment:

A few comments. Some are just for my own understanding.

loop {
    match event_stream.next().await {
        None => {
            panic!("Error! Event stream completed before consensus ended.");
Contributor:

It seems strange that benchmarking logic would add panics. Should these errors be logged instead?
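A minimal sketch of the suggested alternative, with the stream and event types as placeholder assumptions rather than the PR's actual definitions:

```rust
use futures::StreamExt;

// Hypothetical sketch, not the PR's code: log the early termination and stop
// the benchmarking loop instead of panicking.
async fn run_benchmark_loop<S, E>(mut event_stream: S)
where
    S: futures::Stream<Item = E> + Unpin,
    E: std::fmt::Debug,
{
    loop {
        match event_stream.next().await {
            None => {
                tracing::error!("event stream completed before consensus ended");
                break; // stop benchmarking instead of taking down the node
            }
            Some(event) => {
                tracing::debug!(?event, "benchmark event");
                // ... per-event handling would stay as it is in the PR ...
            }
        }
    }
}
```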

}
tracing::warn!("starting consensus");
self.handle.read().await.hotshot.start_consensus().await;

#[cfg(feature = "benchmarking")]
if has_orchestrator_client {
Contributor:

Benchmarking logic is adding a lot of complexity; can we hide it in a function or method? I'm not sure of the best strategy, but maybe it could just be another method on SequencerContext that we call from here...

Contributor Author:

+1, yeah, I'm also thinking about that.

Contributor:

I think you could just add a benchmark() method, gated by the benchmarking feature, to this same impl. Then just call that method on line 267 (instead of setting has_orchestrator_client = true), and you wouldn't need the has_orchestrator_client variable or the following if statement that uses it.
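A rough sketch of that suggestion, using a placeholder stand-in for the real SequencerContext rather than the crate's actual type:

```rust
// Hypothetical stand-in for the real SequencerContext; fields elided.
struct SequencerContext;

impl SequencerContext {
    /// Sketch of the suggestion: benchmark logic lives in one method, compiled
    /// only with the `benchmarking` feature, so the caller no longer needs the
    /// `has_orchestrator_client` flag or the `if` block that uses it.
    #[cfg(feature = "benchmarking")]
    async fn benchmark(&self) {
        // collect metrics and report them to the orchestrator client here
    }
}
```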

Contributor Author (@dailinsubjam, Jul 12, 2024):

After the restructure (motivation here: #1695), I moved all the benchmarking logic to submit-transactions.rs, since we already calculate latency there. So benchmarking no longer has its own function; it lives alongside the latency calculation after subscribing to availability/stream/blocks/{}.
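As a rough illustration of the "metrics saved into a file" part, a hypothetical helper that appends one run's results to the CSV; the path matches the PR description, but the column layout is an assumption:

```rust
use std::{fs::OpenOptions, io::Write, time::Duration};

// Hypothetical helper: append one benchmark run's metrics as a CSV row.
// The columns (block count, total latency in seconds) are illustrative only.
fn append_run_metrics(total_blocks: u64, total_latency: Duration) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("scripts/benchmarks_results/results.csv")?;
    writeln!(file, "{},{:.3}", total_blocks, total_latency.as_secs_f64())
}
```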

Some(Event { event, .. }) => {
    match event {
        EventType::Error { error } => {
            tracing::error!("Error in consensus: {:?}", error);
Contributor:

Not sure how this is handled elsewhere, but would it be useful for introspection to distinguish benchmarking log events with a specific prefix?
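One way that could look with the tracing crate (illustrative only, not what the PR does): tag benchmark-related events with an explicit target or a plain prefix so they are easy to filter.

```rust
// Illustrative only: mark benchmark-related log lines so they can be filtered.
fn log_benchmark_error(error: &dyn std::fmt::Debug) {
    tracing::error!(target: "benchmark", "Error in consensus: {:?}", error);
    // or, with a plain string prefix:
    // tracing::error!("[benchmark] Error in consensus: {:?}", error);
}
```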

sequencer/src/context.rs (outdated comment, resolved)
@dailinsubjam dailinsubjam requested a review from tbro July 12, 2024 14:54
@dailinsubjam dailinsubjam requested a review from babdor July 16, 2024 17:50
@dailinsubjam (Contributor Author) commented:

Also cc @babdor for future tooling with benchmarks

@tbro (Contributor) left a comment:

LGTM.

@dailinsubjam dailinsubjam merged commit 0331a90 into main Jul 17, 2024
16 checks passed
@dailinsubjam dailinsubjam deleted the sishan/benchmark branch July 17, 2024 18:59
@dailinsubjam dailinsubjam linked an issue Jul 17, 2024 that may be closed by this pull request