
feat: 📈 finish benchmarking the storage-providers pallet's extrinsics #344

Open · wants to merge 8 commits into base: main
Conversation

@TDemeco (Contributor) commented Jan 30, 2025

This PR benchmarks the two remaining unbenchmarked extrinsics from the storage-providers pallet (top_up_deposit and delete_provider). To do so, it:

  • Adds two fields to the MainStorageProvider type that holds an MSP's metadata:
    • amount_of_value_props: the total number of value propositions this MSP has added to its repertoire. This is used to know how much weight will be needed to clean those value propositions from storage.
      • Logic to add or subtract one from this value for tracking was added where needed.
    • amount_of_buckets: the current number of buckets the MSP is storing. This is likewise used to know how much weight will be needed to delete or move those buckets.
      • This field uses a type defined in the pallet's configuration (BucketCount) to allow flexibility in defining the maximum number of buckets the pallet's user deems an MSP can store.
      • Logic to add or subtract one from this value for tracking was added where needed.
      • Note: currently, the buckets are neither moved nor deleted when cleaning up storage. This should be designed and implemented in another PR.
  • Adds comments to all fields of the MSP's metadata, for clarity.
  • Fixes the msp_sign_off extrinsic. Previously it was not clearing the storage occupied by the MSP's value propositions; now it correctly clears everything.
    • Since this changes the weight requirement of the extrinsic, the benchmark was refactored accordingly, and the required weight is now a function of the number of value propositions the MSP has.
  • Fixes the delete_provider extrinsic. Like msp_sign_off, it was not clearing the storage used by the value propositions if the provider to delete was an MSP, but in this case because drain_prefix returns a lazy iterator that was never consumed, so no elements were actually deleted.
    • Also adds a cleanup of the MainStorageProviderIdsToBuckets storage, since with this extrinsic an MSP can be deleted while still having buckets. As mentioned above, specific logic to delete or move these buckets should be implemented in another PR.
    • Also adds a call to stop all BSP cycles before deleting a BSP, to avoid a non-existent BSP having to submit proofs and randomness seeds.
    • Updates the extrinsic test to check for all these scenarios.
  • Adds the benchmarks for both top_up_deposit and delete_provider.
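The counter bookkeeping described above can be sketched in plain Rust. Field names follow the PR description; the concrete types, the saturating arithmetic, and the method names are assumptions about how the tracking logic might look, not the pallet's actual definitions:

```rust
// Illustrative stand-ins for the pallet's configurable types.
type ValuePropCount = u32;
type BucketCount = u128;

#[derive(Debug, Default)]
struct MainStorageProvider {
    /// Total number of value propositions this MSP has added.
    amount_of_value_props: ValuePropCount,
    /// Current number of buckets this MSP is storing.
    amount_of_buckets: BucketCount,
}

impl MainStorageProvider {
    // Tracking logic: bump the counters wherever value props or
    // buckets are added/removed, saturating to avoid over/underflow.
    fn add_value_prop(&mut self) {
        self.amount_of_value_props = self.amount_of_value_props.saturating_add(1);
    }
    fn remove_value_prop(&mut self) {
        self.amount_of_value_props = self.amount_of_value_props.saturating_sub(1);
    }
    fn add_bucket(&mut self) {
        self.amount_of_buckets = self.amount_of_buckets.saturating_add(1);
    }
    fn remove_bucket(&mut self) {
        self.amount_of_buckets = self.amount_of_buckets.saturating_sub(1);
    }
}

fn main() {
    let mut msp = MainStorageProvider::default();
    msp.add_value_prop();
    msp.add_bucket();
    msp.add_bucket();
    msp.remove_bucket();
    assert_eq!(msp.amount_of_value_props, 1);
    assert_eq!(msp.amount_of_buckets, 1);
    msp.remove_value_prop();
    assert_eq!(msp.amount_of_value_props, 0);
    println!("ok");
}
```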

Fixes issue #24
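The refactored msp_sign_off benchmark makes the required weight a linear function of the number of value propositions. A minimal model of that idea, with entirely hypothetical constants (real values come from the benchmarking run, not from this sketch):

```rust
// Hypothetical weight components (ref-time units); the actual numbers
// are produced by the pallet's benchmarks, not these placeholders.
const BASE_WEIGHT: u64 = 50_000_000;
const PER_VALUE_PROP: u64 = 2_000_000;

/// Weight charged for msp_sign_off as a function of how many value
/// propositions must be cleaned from storage.
fn msp_sign_off_weight(value_props: u64) -> u64 {
    BASE_WEIGHT + PER_VALUE_PROP * value_props
}

fn main() {
    assert_eq!(msp_sign_off_weight(0), 50_000_000);
    println!("{}", msp_sign_off_weight(3)); // prints 56000000
}
```

This is why the pallet needs amount_of_value_props up front: the weight must be computable before the cleanup loop runs.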

@TDemeco TDemeco changed the title feat: 📈 finish benchmarking the storage-providers pallet feat: 📈 finish benchmarking the storage-providers pallet's extrinsics Jan 30, 2025
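The delete_provider bug described above comes down to iterator laziness: Substrate's drain_prefix returns a lazy iterator that only removes entries as it is advanced, so constructing it without consuming it deletes nothing. A plain-Rust sketch of the same pitfall, with pop() standing in for storage removal:

```rust
fn main() {
    // Model of a storage prefix holding an MSP's value propositions.
    let mut props = vec!["prop_a", "prop_b", "prop_c"];

    // Bug pattern: the lazy "drain" iterator is created but never
    // advanced, so its side-effecting pop() never runs.
    let unused = std::iter::from_fn(|| props.pop());
    drop(unused);
    assert_eq!(props.len(), 3); // storage untouched

    // Fix: consume the iterator so every entry is actually drained.
    for _prop in std::iter::from_fn(|| props.pop()) {}
    assert_eq!(props.len(), 0); // storage is now clean
    println!("ok");
}
```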
MainStorageProviders::<T>::remove(&provider_id);
AccountIdToMainStorageProviderId::<T>::remove(msp.owner_account);
// TODO: The buckets of this MSP should be moved to another one somehow, we can't leave the user without its data because
Contributor:

We could emit an event with all buckets that were being stored by the deleted MSP so users can recover them with move bucket requests.

Contributor Author:

Good call, I added that!
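The event-based recovery suggested above can be modeled in plain Rust: on deletion, drain the MSP's bucket index and surface the orphaned bucket IDs in an event so users can later issue move-bucket requests. All names here are illustrative, not the pallet's actual API:

```rust
use std::collections::BTreeMap;

type MspId = u32;
type BucketId = u64;

#[derive(Debug, PartialEq)]
enum Event {
    /// Emitted when an MSP is deleted while still storing buckets.
    MspDeleted { msp: MspId, buckets: Vec<BucketId> },
}

/// Remove the MSP's bucket index entry and report which buckets
/// were orphaned so their owners can request moves.
fn delete_msp(msp: MspId, index: &mut BTreeMap<MspId, Vec<BucketId>>) -> Event {
    let buckets = index.remove(&msp).unwrap_or_default();
    Event::MspDeleted { msp, buckets }
}

fn main() {
    let mut index = BTreeMap::new();
    index.insert(7, vec![100, 101]);
    let event = delete_msp(7, &mut index);
    assert_eq!(event, Event::MspDeleted { msp: 7, buckets: vec![100, 101] });
    assert!(index.get(&7).is_none()); // index entry cleaned up
    println!("ok");
}
```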

@snowmead (Contributor) left a comment:

Awesome job!

@@ -498,6 +498,7 @@ impl pallet_storage_providers::Config for Runtime {
type StorageDataUnit = StorageDataUnit;
type StorageDataUnitAndBalanceConvert = StorageDataUnitAndBalanceConverter;
type SpCount = u32;
type BucketCount = u32;
Contributor:

Can we set this to be u128 to avoid any chances of overflowing?

Contributor Author:

Done!

@santikaplan (Contributor) left a comment:

Beast!!

3 participants