This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Benchmark Parachains Runtime: Tracking Issue #3850

Closed
14 tasks done
shawntabrizi opened this issue Sep 15, 2021 · 13 comments
Labels
C3-medium PR touches the given topic and has a medium impact on builders.

Comments

@shawntabrizi
Member

shawntabrizi commented Sep 15, 2021

This Issue will be used to track end to end benchmarking of the Parachains runtime:

https://github.com/paritytech/polkadot/tree/master/runtime/parachains

When checking off an item, please link the PR that adds the benchmark.

@shawntabrizi added the C3-medium and Q1-mentor labels Sep 15, 2021
@shawntabrizi
Member Author

We are looking for help on this issue, so consider it if you are an external contributor and want to get more involved with benchmarking!

@hirschenberger
Contributor

> We are looking for help on this issue, so consider it if you are an external contributor and want to get more involved with benchmarking!

I'd like to help. Can I take @KiChjang's changes as a guide for what to do? Is there anything I have to take care of beforehand? I would start with Disputes then.

@shawntabrizi
Member Author

@hirschenberger happy to have you help. I don't really have a mental model of which pallets are easier than others, so maybe take a look at them and see what you can learn.

The main thing with any benchmarking is:

  • reading the pallet and understanding the code
  • determining the worst case scenario path for an extrinsic
  • creating a benchmark which sets up that worst case scenario, and executes the call
  • verifying the final resulting state is what you expect.

https://github.com/paritytech/substrate/tree/master/frame/benchmarking
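Those four steps can be sketched end to end with a toy example (a plain struct standing in for a pallet; all names and bounds here are hypothetical, not from the parachains runtime):

```rust
// Self-contained sketch of the benchmarking recipe above, using a toy
// "pallet" instead of FRAME. Names and the queue bound are made up.

const MAX_QUEUE: usize = 1_000; // assumed upper bound on the queue size

struct DmpQueue {
    messages: Vec<u32>,
}

impl DmpQueue {
    // The "extrinsic" under benchmark: prunes up to `n` messages.
    fn prune(&mut self, n: usize) {
        let n = n.min(self.messages.len());
        self.messages.drain(..n);
    }
}

fn main() {
    // 1. Worst-case setup: the queue is filled to its assumed maximum.
    let mut queue = DmpQueue { messages: (0..MAX_QUEUE as u32).collect() };

    // 2. Execute the call with worst-case parameters.
    queue.prune(MAX_QUEUE);

    // 3. Verify the final resulting state is what you expect.
    assert!(queue.messages.is_empty());
    println!("worst-case prune verified");
}
```

In a real FRAME benchmark the same phases (setup, call, verify) live inside the `benchmarks!` macro from the `frame_benchmarking` crate linked above.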

@hirschenberger
Contributor

hirschenberger commented Sep 17, 2021

@shawntabrizi
Ok, I read into it and will start with Dmp.

I assume I have to go through some bootstrapping process:

  • Writing the benches
  • Compiling and running the benches to generate the weights.rs
  • Adding the weights to the extrinsic declarations
  • Committing; the final weights are then generated on the spec hardware by CI

Is that correct?
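For reference, the local half of that loop usually looks something like the following; the exact subcommand, flags, pallet name, and output path are from memory and may differ between Polkadot versions, so treat this as illustrative only:

```shell
# Build with the benchmarking feature enabled.
cargo build --release --features runtime-benchmarks

# Run the benchmarks for one pallet and write a weights.rs skeleton
# (pallet name and paths are illustrative).
./target/release/polkadot benchmark \
    --chain=polkadot-dev \
    --pallet=runtime_parachains::dmp \
    --extrinsic='*' \
    --steps=50 \
    --repeat=20 \
    --output=./runtime/polkadot/src/weights/
```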

EDIT:

The Dmp pallet does not contain any extrinsics, only functions called by extrinsics. So we still need to benchmark these and return the determined weights to the callers?
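One common pattern for such functions, sketched below in plain Rust (the `Weight` alias, function names, and per-message costs are made up for illustration), is to benchmark the internal function separately and have it return its measured weight so the calling extrinsic can account for it:

```rust
// Sketch of the "return the weight" pattern: an internal function with no
// extrinsic of its own hands its weight back to the caller. Hypothetical names.

type Weight = u64;

// Weight formula as a benchmark run might produce it (placeholder numbers).
fn prune_dmq_weight(msg_count: u64) -> Weight {
    10_000 + 1_000 * msg_count // base cost + per-message cost
}

// The internal function does its work and reports what it cost.
fn prune_dmq(queue: &mut Vec<u32>, count: usize) -> Weight {
    let pruned = count.min(queue.len());
    queue.drain(..pruned);
    prune_dmq_weight(pruned as u64)
}

fn main() {
    let mut queue: Vec<u32> = (0..5).collect();
    // The calling extrinsic accumulates the returned weight.
    let mut total: Weight = 0;
    total += prune_dmq(&mut queue, 3);
    assert_eq!(queue.len(), 2);
    assert_eq!(total, 13_000);
    println!("total weight: {total}");
}
```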

@kianenigma
Contributor

The HRMP pallet is almost done; it just needs the inter-pallet calls fixed (I am not sure which pattern we're going to use to manage them -- return the weight? or register it yourself?), and reviews. The code and benchmarks should be pretty self-explanatory. I'll be away for a week; if someone wants to finish it sooner, be my guest.

@emostov
Contributor

emostov commented Sep 30, 2021

I am wondering what we should do with inter-pallet calls, as @kianenigma alluded to.

I am working on paras inherents and it's rather complex to exhaust all possible code paths for cases with many inter-pallet calls and logic branches. As an example, the code snippet below is from inclusion::Pallet::enact_candidates. All users of enact_candidates ignore its returned weight. Additionally, the weights from the inter-pallet calls within enact_candidates are all just hardcoded DB accesses, which don't include computation time.

```rust
// enact the messaging facet of the candidate.
weight += <dmp::Pallet<T>>::prune_dmq(
    receipt.descriptor.para_id,
    commitments.processed_downward_messages,
);
weight += <ump::Pallet<T>>::receive_upward_messages(
    receipt.descriptor.para_id,
    commitments.upward_messages,
);
weight += <hrmp::Pallet<T>>::prune_hrmp(
    receipt.descriptor.para_id,
    T::BlockNumber::from(commitments.hrmp_watermark),
);
weight += <hrmp::Pallet<T>>::queue_outbound_hrmp(
    receipt.descriptor.para_id,
    commitments.horizontal_messages,
);
Self::deposit_event(Event::<T>::CandidateIncluded(
    plain,
    commitments.head_data.clone(),
    core_index,
    backing_group,
));
weight +
    <paras::Pallet<T>>::note_new_head(
        receipt.descriptor.para_id,
        commitments.head_data,
        relay_parent_number,
    )
}
```

From my current PoV it seems like the benchmarking code would be more maintainable if most inter-pallet calls were benchmarked on their own, but I'm not sure if there are negative ramifications I haven't thought through yet.

I am wondering if others think we should try to weigh these inter-pallet functions on their own? And if we do want to do that, should they self-register their weights or always return them to their caller?
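The two options can be contrasted with a toy sketch (plain Rust, hypothetical names; `BlockWeight` stands in for whatever shared accumulator the runtime would expose):

```rust
// Toy contrast of the two weight-accounting options discussed above.

type Weight = u64;

// Option A: the inter-pallet function *returns* its weight and the
// caller is responsible for adding it up (the enact_candidates style).
fn prune_dmq_returning() -> Weight {
    // ... do the actual work here ...
    7_000
}

// Option B: the function *registers* its own weight against a shared
// block-weight accumulator, so callers can ignore any return value.
struct BlockWeight(Weight);

impl BlockWeight {
    fn register(&mut self, w: Weight) {
        self.0 += w;
    }
}

fn prune_dmq_registering(meter: &mut BlockWeight) {
    // ... do the actual work here ...
    meter.register(7_000);
}

fn main() {
    // Option A: weight flows back up the call stack.
    let mut weight: Weight = 0;
    weight += prune_dmq_returning();
    assert_eq!(weight, 7_000);

    // Option B: weight is recorded at the point of use.
    let mut meter = BlockWeight(0);
    prune_dmq_registering(&mut meter);
    assert_eq!(meter.0, 7_000);
}
```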

@hirschenberger
Contributor

hirschenberger commented Oct 3, 2021

I have a noob problem in the ump-pallet PR #3889. I want to reuse some functions from the crate's tests module, but I can't access them for a reason I don't see.

See here for the error. Maybe somebody can take a look and help me?

@emostov
Contributor

emostov commented Oct 4, 2021

@hirschenberger from a quick glance I think the issue is that the test module is feature gated with #[cfg(test)] and the runtime-benchmarks feature does not compile them. A quick fix would be to change #[cfg(test)] to #[cfg(any(test, feature = "runtime-benchmarks"))]. I think the better thing to do though would be to pull out the helper functions you want from the test module to some test utils/helper module and feature that with #[cfg(any(test, feature = "runtime-benchmarks"))].
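A sketch of that suggested layout (module and function names are hypothetical), so the helper is visible to both `cargo test` and `--features runtime-benchmarks` builds:

```rust
// Shared helpers pulled out of `mod tests` and gated for both unit tests
// and benchmark builds. Names are made up for illustration.
#[cfg(any(test, feature = "runtime-benchmarks"))]
pub mod testing_utils {
    // Helper previously trapped inside #[cfg(test)] mod tests.
    pub fn fill_queue(n: u32) -> Vec<u32> {
        (0..n).collect()
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn helper_is_visible_to_tests() {
        assert_eq!(super::testing_utils::fill_queue(3), vec![0, 1, 2]);
    }
}

#[cfg(feature = "runtime-benchmarks")]
mod benchmarking {
    // ...benchmarks can use super::testing_utils::fill_queue too...
}
```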

@hirschenberger
Contributor

> @hirschenberger from a quick glance I think the issue is that the test module is feature gated with #[cfg(test)] and the runtime-benchmarks feature does not compile them. A quick fix would be to change #[cfg(test)] to #[cfg(any(test, feature = "runtime-benchmarks"))]. I think the better thing to do though would be to pull out the helper functions you want from the test module to some test utils/helper module and feature that with #[cfg(any(test, feature = "runtime-benchmarks"))].

I thought runtime-benchmarks implies test because here the GenesisConfigBuilder is found in the tests module. That's what confuses me.

@emostov
Contributor

emostov commented Oct 4, 2021

> > @hirschenberger from a quick glance I think the issue is that the test module is feature gated with #[cfg(test)] and the runtime-benchmarks feature does not compile them. A quick fix would be to change #[cfg(test)] to #[cfg(any(test, feature = "runtime-benchmarks"))]. I think the better thing to do though would be to pull out the helper functions you want from the test module to some test utils/helper module and feature that with #[cfg(any(test, feature = "runtime-benchmarks"))].
>
> I thought runtime-benchmarks implies test because here the GenesisConfigBuilder is found in the tests module. That's what confuses me.

That macro, frame_benchmarking::impl_benchmark_test_suite!, generates a module that is feature gated by test; thus it is always able to use GenesisConfigBuilder. You can see that here: https://github.com/paritytech/substrate/blob/1c2e9a8c6d073d924f867cd883138630c9dff9f7/frame/benchmarking/src/lib.rs#L1554

I would suspect all your code compiles when you do cargo test --features runtime-benchmarks ... but does not work when you do cargo check --features runtime-benchmarks.

Basically what I would do is pull the functions you need for benchmarking out of the tests module and put them in the mock or their own module and then feature gate them like this:

https://github.com/paritytech/substrate/blob/c000780dba99a611fadbf83873073e024be1be0b/frame/bags-list/src/mock.rs#L42-L43

@hirschenberger
Contributor

I would be ready for an initial review: #3889

@hirschenberger
Contributor

Can somebody please take a look at #3889? The discussion seems stuck and I don't know how to get this landed.

@hirschenberger
Contributor

@shawntabrizi #3889 is merged, I think we're done here.
