This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Multithreaded POW mining worker #9628

Closed

Conversation

Contributor

@Wizdave97 commented Aug 25, 2021

What does this PR do?

This PR modifies the sc_consensus_pow::start_mining worker to return a cloneable channel that receives a broadcast of the latest mining metadata, so nodes can try to compute the seal using a multithreaded approach.
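For illustration, a toy, self-contained sketch of the pattern described here: several mining threads share the latest metadata and grind disjoint nonce ranges. Every name in it (`MiningMetadata`, `meets_difficulty`, the shared slot standing in for the broadcast channel) is a hypothetical stand-in; the actual channel type and metadata struct introduced by the PR are not shown in this conversation.

```rust
// Toy stand-ins only: `MiningMetadata`, `meets_difficulty` and the shared
// slot below are hypothetical; the PR itself returns a broadcast-style
// channel from `sc_consensus_pow::start_mining`, not reproduced here.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};
use std::thread;

#[derive(Clone)]
struct MiningMetadata {
    pre_hash: [u8; 32], // hash of the block being sealed (assumed field)
    difficulty: u64,    // mining difficulty (assumed field)
}

// Placeholder seal check; a real miner would hash `pre_hash ++ nonce`
// and compare the digest against the difficulty target.
fn meets_difficulty(nonce: u64, meta: &MiningMetadata) -> bool {
    nonce != 0 && nonce % (1_000_000 + meta.difficulty) == 0
}

fn main() {
    const THREADS: u64 = 4;

    // Shared "latest metadata" slot, standing in for the cloneable receiver
    // each mining thread would subscribe to.
    let latest = Arc::new(RwLock::new(MiningMetadata {
        pre_hash: [0u8; 32],
        difficulty: 7,
    }));
    let stop = Arc::new(AtomicBool::new(false));

    let handles: Vec<_> = (0..THREADS)
        .map(|id| {
            let (latest, stop) = (Arc::clone(&latest), Arc::clone(&stop));
            thread::spawn(move || {
                // Each thread grinds a disjoint stride of the nonce space.
                let mut nonce = id;
                while !stop.load(Ordering::Relaxed) {
                    let meta = latest.read().unwrap().clone();
                    if meets_difficulty(nonce, &meta) {
                        println!("thread {id} found nonce {nonce}");
                        // A real worker would build and submit the seal here.
                        stop.store(true, Ordering::Relaxed);
                        return;
                    }
                    nonce += THREADS;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```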

@Wizdave97 requested a review from sorpaas as a code owner on August 25, 2021 19:15

cla-bot-2021 bot commented Aug 25, 2021

User @Wizdave97, please sign the CLA here.

@Wizdave97 changed the title from "Multithreaded compatible POW mining" to "Multithreading compatible POW mining" on Aug 25, 2021
Member

sorpaas commented Aug 25, 2021

Multithreaded mining is already working well. Just distribute MiningWorker across multiple threads and call MiningWorker::metadata and MiningWorker::submit in each thread.

I don't really like the approach in this PR though. It forces the mining thread to be designed with a certain lifecycle style, which is not always possible. Some mining algorithms have certain "tricks" that coordinate multithreading themselves; for those, this PR's approach is unusable. The pow module should strive to be as generic as possible to support as many Substrate chains as possible.

Also please remove the accidental workspace file addition and put the ss58 registry change into a separate PR.
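For illustration, a toy, self-contained sketch of the layout sorpaas describes: one shared worker handle, with each thread reading metadata and submitting a found seal through it. Only the `metadata`/`submit` method names come from the comment above; `MockMiningWorker`, its fields, and the placeholder difficulty check are hypothetical stand-ins for the real `sc_consensus_pow::MiningWorker`.

```rust
// `MockMiningWorker` is a hypothetical stand-in mirroring only the
// `metadata`/`submit` surface mentioned above; the real type lives in
// `sc_consensus_pow` and takes many more parameters.
use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Clone)]
struct Metadata {
    best_hash: [u8; 32], // assumed field
    difficulty: u64,     // assumed field
}

struct MockMiningWorker {
    current: Option<Metadata>,
    sealed: bool,
}

impl MockMiningWorker {
    fn metadata(&self) -> Option<Metadata> {
        self.current.clone()
    }
    fn submit(&mut self, _seal: Vec<u8>) -> bool {
        // The real worker would verify the seal and import the block.
        self.sealed = true;
        true
    }
}

fn main() {
    let worker = Arc::new(Mutex::new(MockMiningWorker {
        current: Some(Metadata { best_hash: [0u8; 32], difficulty: 7 }),
        sealed: false,
    }));

    let threads: Vec<_> = (0..4u64)
        .map(|id| {
            let worker = Arc::clone(&worker);
            thread::spawn(move || {
                let mut nonce = id;
                loop {
                    // Re-read the metadata each round so stale work stops once
                    // a seal was submitted (a real miner would batch many
                    // nonces per metadata read instead of locking every time).
                    let meta = {
                        let w = worker.lock().unwrap();
                        if w.sealed {
                            return;
                        }
                        match w.metadata() {
                            Some(m) => m,
                            None => return,
                        }
                    };
                    // Placeholder proof of work; a real algorithm hashes
                    // `best_hash ++ nonce` and checks it against `difficulty`.
                    if nonce != 0 && nonce % (1_000_000 + meta.difficulty) == 0 {
                        worker.lock().unwrap().submit(nonce.to_le_bytes().to_vec());
                        return;
                    }
                    nonce += 4;
                }
            })
        })
        .collect();

    for t in threads {
        t.join().unwrap();
    }
}
```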

@Wizdave97 changed the title from "Multithreading compatible POW mining" to "Multithreaded POW mining worker" on Aug 25, 2021
tomaka and others added 14 commits August 25, 2021 21:52
…ytech#9202)

* Support multiple, mergable vesting schedules

* Update node runtime

* Remove some TODO design questions and put them as comments

* Update frame/vesting/src/benchmarking.rs

* Syntax and comment clean up

* Create filter enum for removing schedules

* Dry vesting calls with do_vest

* Improve old benchmarks to account for max schedules

* Update WeightInfo trait and make dummy fns

* Add merge_schedule weights

* Explicitly test multiple vesting schedules

* Make new vesting tests more clear

* Apply suggestions from code review

* Update remove_vesting_schedule to error with no index

* Try reduce spacing diff

* Apply suggestions from code review

* Use get on vesting for bounds check; check origin first

* No filter tuple; various simplifications

* unwrap or default when getting user schedules

* spaces be gone

* ReadMe fixes

* Update frame/vesting/src/lib.rs

Co-authored-by: Peter Goodspeed-Niklaus <coriolinus@users.noreply.github.com>

* address some comments for docs

* merge sched docs

* Apply suggestions from code review

Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>

* log error when trying to push to vesting vec

* use let Some, not is_some

* remove_vesting_schedule u32, not optin

* new not try_new, create validate builder; VestingInfo

* Merge prep: break out tests and mock

* Add files forgot to include in merge

* revert some accidental changes to merged files

* Revert remaining accidental file changes

* More revert of accidental file change

* Try to reduce diff on tests

* namespace Vesting; check key when key should not exist;

* ending_block throws error on per_block of 0

* Try improve merge vesting info comment

* Update frame/vesting/src/lib.rs

Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>

* add validate + correct; handle duration > blocknumber

* Move vesting_info module to its own file

* Separate Vesting/locks updates from writing

* Add can_add_vesting schedule

* Adjust min vested transfer to be greater than all ED

* Initial integrity test impl

* merge_finished_and_yet_to_be_started_schedules

* Make sure to assert storage items are cleaned up

* Migration initial impl (not tested)

* Correct try-runtime hooks

* Apply suggestions from code review

Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>

* header

* WIP: improve benchmarks

* Benchmarking working

* benchmarking: step over max schedules

* cargo run --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_vesting --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/vesting/src/weights.rs --template=./.maintain/frame-weight-template.hbs

* Simplify APIs by accepting vec; convert to bounded on write

* Test:  build_genesis_has_storage_version_v1

* Test more error cases

* Hack to get polkadot weights to work; should revert later

* Improve benchmarking; works on polkadot

* cargo run --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_vesting --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/vesting/src/weights.rs --template=./.maintain/frame-weight-template.hbs

* WIP override storage

* Set storage not working example

* Remove unused tests

* VestingInfo: make public, derive MaxEncodedLen

* Rename ending_block to ending_block_as_balance

* Superficial improvements

* Check for end block infinite, not just duration

* More superficial update

* Update tests

* Test vest with multi schedule

* Don't use half max balance in benchmarks

* Use debug_assert when locked is unexpectedly 0

* Implement exec_action

* Simplify per_block calc in vesting_info

* VestingInfo.validate in add_vesting_schedule & can_add_vesting_schedule

* Simplify post migrate check

* Remove merge event

* Minor benchmarking updates

* Remove VestingInfo.correct

* per_block accessor max with 1

* Improve comment

* Remove debug

* Fix add schedule comment

* Apply suggestions from code review

Co-authored-by: Peter Goodspeed-Niklaus <coriolinus@users.noreply.github.com>

* no ref for should_remove param

* Remove unused vestingaction derive

* Asserts to show balance unlock in merge benchmark

* Remove unused imports

* trivial

* Fix benchmark asserts to handle non-multiple of 20 locked

* Add generate_storage_info

* migration :facepalm

* Remove per_block 0 logic

* Update frame/vesting/src/lib.rs

* Do not check for ending later than greatest block

* Apply suggestions from code review

* Benchmarks: simplify vesting schedule creation

* Add log back for migration

* Add note in ext docs explaining that all schedules will vest

* Make integrity test work

* Improve integrity test

* Remove unnecessary type param from VestingInfo::new

* Remove unnecessary result for ending_block_as_balance

* Remove T param from ending_block_as_balance

* Reduce visibility of raw_per_block

* Remove unused type param for validate

* update old comment

* Make log a dep; log warn in migrate

* VestingInfo.validate returns Err(()), no T type param

* Try improve report_schedule_updates

* is_valid, not validate

* revert node runtime reorg;

* change schedule validity check to just warning

* Simplify merge_vesting_info return type

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>

* Add warning for migration

* Fix indentation

* Delete duplicate warnings

* Reduce diff in node runtime

* Fix benchmark build

* Upgrade cargo.toml to use 4.0.0-dev

* Cleanup

* MaxVestingSchedulesGetter initial impl

* MinVestedTransfer getter initial impl

* Test MaxVestingSchedules & MinVestedTransfer getters; use getters in benchmarks

* Run cargo fmt

* Revert MinVestedTransfer & MaxVestingSchedules getters; Add integrity test

* Make MAX_VESTING_SCHEDULES a const

* fmt

* WIP: benchmark improvements

* Finish benchmark update

* Add test for transfer to account with less than ed

* Rm min_new_account_transfer; move sp-io to dev-dep

* Reduce cargo.toml diff

* Explain MAX_VESTING_SCHEDULES choice

* Fix after merge

* Try fix CI complaints

* cargo run --quiet --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_vesting --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/vesting/src/weights.rs --template=./.maintain/frame-weight-template.hbs

* cargo run --quiet --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_vesting --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/vesting/src/weights.rs --template=./.maintain/frame-weight-template.hbs

* fmt

* trigger

* fmt

Co-authored-by: Parity Bot <admin@parity.io>
Co-authored-by: Peter Goodspeed-Niklaus <coriolinus@users.noreply.github.com>
Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: kianenigma <kian@parity.io>
prefix 13 for parachain
prefix 113 for sidechains and offchain workers involving a runtime
…ritytech#9611)

Before, we fetched the runtime code from the `TrieBackend`, and this led to not using the storage cache. Thus, we recalculated the storage hash
of the runtime code on every call into the runtime, which killed block-authoring performance on parachains. The solution is to fetch the
runtime code through the storage cache, to make sure we actually use the cache.
* Better RPC Prometheus metrics.

* Add session metrics.

* Add counting requests as well.

* Fix type for web build.

* Fix browser-node

* Filter out unknown method names.

* Change Gauge to Counters

* Use micros instead of millis.

* cargo fmt

* Update client/rpc-servers/src/lib.rs

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>

* move log to separate lines.

* Fix compilation.

* cargo +nightly fmt --all

Co-authored-by: Bastian Köcher <bkchr@users.noreply.github.com>
* Upgrade tokio to 1.10

* Fix test runner

* Try fix it

* Update Cargo.lock

* Review feedback

* ahhhh

* FML

* FMT

* Fix tests
The old implementation was listening for storage changes, and every time
a block changed the `CODE` storage field, it checked whether the runtime
version had changed. It used the best block to compare against the latest
known runtime version. It could happen that you processed the storage
notification of block Y but checked the runtime version of block X (the
current best block). This is also what happened on parachains.
Parachains import blocks and set the new best block in a later step.
This means we imported the block that changed the code, got notified, and
checked the runtime version of the current best block (which would still
be the parent of the block that changed the runtime). As the parent did
not change the runtime, the runtime version also did not change, and we
never notified the subscribers.

The new implementation switches to listening for best imported blocks.
Every time we import a new best block, we check its runtime version
against the latest known runtime version. As we also send a notification
when a parachain sets a block as the new best block, we now trigger this
code path correctly. This moves some computation from checking whether the key
was modified to getting the runtime version. As fetching the runtime
version is a rather common operation, it should not make any big
difference performance-wise.
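
For illustration, a small self-contained sketch of the new logic described in this commit message: react only to notifications for new best blocks and compare each one's runtime version against the latest known version. `ImportNotification` and `RuntimeVersion` here are simplified, hypothetical types that carry the version directly; the real implementation queries the runtime version from the client at the notified block.

```rust
// `ImportNotification` and `RuntimeVersion` are simplified, hypothetical
// types; the real code subscribes to the client's import notifications and
// queries the runtime version at the notified block instead of carrying it.
use std::sync::mpsc;
use std::thread;

#[derive(Clone, Debug, PartialEq)]
struct RuntimeVersion {
    spec_name: &'static str,
    spec_version: u32,
}

struct ImportNotification {
    block_number: u64,
    is_new_best: bool,
    runtime_version: RuntimeVersion,
}

fn main() {
    let (tx, rx) = mpsc::channel::<ImportNotification>();

    let watcher = thread::spawn(move || {
        // Seed with the currently known runtime version.
        let mut last_known = RuntimeVersion { spec_name: "node", spec_version: 1 };
        for note in rx {
            // Key change: only react to new best blocks, because parachains
            // mark a block as best in a separate, later step.
            if !note.is_new_best {
                continue;
            }
            if note.runtime_version != last_known {
                println!(
                    "runtime upgraded at block {}: {:?}",
                    note.block_number, note.runtime_version
                );
                last_known = note.runtime_version;
            }
        }
    });

    let v2 = RuntimeVersion { spec_name: "node", spec_version: 2 };
    // Block 10 changes the code but is not yet best: nothing is reported,
    // avoiding the race described above.
    tx.send(ImportNotification {
        block_number: 10,
        is_new_best: false,
        runtime_version: v2.clone(),
    })
    .unwrap();
    // Once block 10 becomes the new best block, the upgrade is reported.
    tx.send(ImportNotification {
        block_number: 10,
        is_new_best: true,
        runtime_version: v2,
    })
    .unwrap();

    drop(tx);
    watcher.join().unwrap();
}
```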
* pallet-proxy: emit events on proxy added.

* Apply review suggestions.
* Embed wasmi into the runtime

* cargo run --quiet --release --features=runtime-benchmarks --manifest-path=bin/node/cli/Cargo.toml -- benchmark --chain=dev --steps=50 --repeat=20 --pallet=pallet_contracts --extrinsic=* --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./frame/contracts/src/weights.rs --template=./.maintain/frame-weight-template.hbs

Co-authored-by: Parity Benchmarking Bot <admin@parity.io>
…9621)

* expose storage prefix generation, remove duplicate code

* remove more duplicate code

* clean up import

* fix io test

* remove slicing

* Update frame/support/src/storage/mod.rs

Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>

Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>

fix: revert unnecessary changes
@Wizdave97 force-pushed the wizdave/pow-multithreaded-worker branch from 27a2613 to d5dd915 on August 25, 2021 21:08
@Wizdave97 requested review from andresilva, athei and a team as code owners on August 25, 2021 21:08

cla-bot-2021 bot commented Aug 25, 2021

User @coriolinus, please sign the CLA here.
