diff --git a/roadmap/implementors-guide/src/architecture.md b/roadmap/implementors-guide/src/architecture.md index e69c8fb051a8..8090c8c20ada 100644 --- a/roadmap/implementors-guide/src/architecture.md +++ b/roadmap/implementors-guide/src/architecture.md @@ -76,7 +76,7 @@ It is also helpful to divide Node-side behavior into two further categories: Net ``` -Node-side behavior is split up into various subsystems. Subsystems are long-lived workers that perform a particular category of work. Subsystems can communicate with each other, and do so via an [Overseer](/node/overseer.html) that prevents race conditions. +Node-side behavior is split up into various subsystems. Subsystems are long-lived workers that perform a particular category of work. Subsystems can communicate with each other, and do so via an [Overseer](node/overseer.html) that prevents race conditions. Runtime logic is divided up into Modules and APIs. Modules encapsulate particular behavior of the system. Modules consist of storage, routines, and entry-points. Routines are invoked by entry points, by other modules, or upon block initialization or closing. Routines can read and alter the storage of the module. Entry-points are the means by which new information is introduced to a module and can limit the origins (user, root, parachain) that they accept being called by. Each block in the blockchain contains a set of Extrinsics. Each extrinsic specifies which entry point to trigger and which data should be passed to it. Runtime APIs provide a means for Node-side behavior to extract meaningful information from the state of a single fork. diff --git a/roadmap/implementors-guide/src/node/availability/availability-distribution.md b/roadmap/implementors-guide/src/node/availability/availability-distribution.md index 834d6e9b8911..a919aed6553c 100644 --- a/roadmap/implementors-guide/src/node/availability/availability-distribution.md +++ b/roadmap/implementors-guide/src/node/availability/availability-distribution.md @@ -2,7 +2,7 @@ Distribute availability erasure-coded chunks to validators. -After a candidate is backed, the availability of the PoV block must be confirmed by 2/3+ of all validators. Validating a candidate successfully and contributing it to being backable leads to the PoV and erasure-coding being stored in the [Availability Store](/node/utility/availability-store.html). +After a candidate is backed, the availability of the PoV block must be confirmed by 2/3+ of all validators. Successfully validating a candidate and contributing to its backing leads to the PoV and erasure-coding being stored in the [Availability Store](../utility/availability-store.html). ## Protocol @@ -34,7 +34,7 @@ We re-attempt to send anything live to a peer upon any view update from that pee On our view change, for all live candidates, we will check if we have the PoV by issuing a `QueryPoV` message and waiting for the response. If the query returns `Some`, we will perform the erasure-coding and distribute all messages to peers that will accept them. -If we are operating as a validator, we note our index `i` in the validator set and keep the `i`th availability chunk for any live candidate, as we receive it. We keep the chunk and its merkle proof in the [Availability Store](/node/utility/availability-store.html) by sending a `StoreChunk` command. This includes chunks and proofs generated as the result of a successful `QueryPoV`. 
+If we are operating as a validator, we note our index `i` in the validator set and keep the `i`th availability chunk for any live candidate, as we receive it. We keep the chunk and its merkle proof in the [Availability Store](../utility/availability-store.html) by sending a `StoreChunk` command. This includes chunks and proofs generated as the result of a successful `QueryPoV`. > TODO: back-and-forth is kind of ugly but drastically simplifies the pruning in the availability store, as it creates an invariant that chunks are only stored if the candidate was actually backed > diff --git a/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md b/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md index 70815c3313ca..4fff0859562a 100644 --- a/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md +++ b/roadmap/implementors-guide/src/node/availability/bitfield-distribution.md @@ -20,6 +20,6 @@ Output: ## Functionality -This is implemented as a gossip system. Register a [network bridge](/node/utility/network-bridge.html) event producer on startup and track peer connection, view change, and disconnection events. Only accept bitfields relevant to our current view and only distribute bitfields to other peers when relevant to their most recent view. Check bitfield signatures in this subsystem and accept and distribute only one bitfield per validator. +This is implemented as a gossip system. Register a [network bridge](../utility/network-bridge.html) event producer on startup and track peer connection, view change, and disconnection events. Only accept bitfields relevant to our current view and only distribute bitfields to other peers when relevant to their most recent view. Check bitfield signatures in this subsystem and accept and distribute only one bitfield per validator. When receiving a bitfield either from the network or from a `DistributeBitfield` message, forward it along to the block authorship (provisioning) subsystem for potential inclusion in a block. diff --git a/roadmap/implementors-guide/src/node/availability/bitfield-signing.md b/roadmap/implementors-guide/src/node/availability/bitfield-signing.md index 36cbba4f2df3..20db290f99c8 100644 --- a/roadmap/implementors-guide/src/node/availability/bitfield-signing.md +++ b/roadmap/implementors-guide/src/node/availability/bitfield-signing.md @@ -20,6 +20,6 @@ If not running as a validator, do nothing. - Determine our validator index `i`, the set of backed candidates pending availability in `r`, and which bit of the bitfield each corresponds to. - > TODO: wait T time for availability distribution? -- Start with an empty bitfield. For each bit in the bitfield, if there is a candidate pending availability, query the [Availability Store](/node/utility/availability-store.html) for whether we have the availability chunk for our validator index. +- Start with an empty bitfield. For each bit in the bitfield, if there is a candidate pending availability, query the [Availability Store](../utility/availability-store.html) for whether we have the availability chunk for our validator index. - For all chunks we have, set the corresponding bit in the bitfield. - Sign the bitfield and dispatch a `BitfieldDistribution::DistributeBitfield` message. 
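Taken together, these steps are a simple construct-and-sign loop over the availability cores. A minimal Rust sketch, assuming hypothetical stand-ins for the Availability Store query and the keystore (`have_chunk` and `sign` below are illustrative closures, not real subsystem APIs):

```rust
/// Hypothetical stand-in for a candidate hash.
type CandidateHash = [u8; 32];

/// Illustrative output shape, not the actual `SignedAvailabilityBitfield` type.
struct SignedAvailabilityBitfield {
    validator_index: u32,
    bits: Vec<bool>,
    signature: Vec<u8>,
}

/// One bit per availability core: set iff a candidate is pending availability
/// on that core and we hold our erasure chunk for it.
fn construct_availability_bitfield(
    validator_index: u32,
    pending: &[Option<CandidateHash>],                // core index -> candidate pending availability
    have_chunk: impl Fn(&CandidateHash, u32) -> bool, // stands in for the Availability Store query
    sign: impl Fn(&[bool]) -> Vec<u8>,                // stands in for keystore signing
) -> SignedAvailabilityBitfield {
    // Start with an empty bitfield.
    let mut bits = vec![false; pending.len()];
    for (bit, candidate) in bits.iter_mut().zip(pending.iter()) {
        if let Some(hash) = candidate {
            *bit = have_chunk(hash, validator_index);
        }
    }
    // Sign the completed bitfield; the result is what a
    // `BitfieldDistribution::DistributeBitfield` message would carry.
    let signature = sign(&bits);
    SignedAvailabilityBitfield { validator_index, bits, signature }
}
```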
diff --git a/roadmap/implementors-guide/src/node/backing/candidate-backing.md b/roadmap/implementors-guide/src/node/backing/candidate-backing.md index c4737f75e01e..211c2ce42d05 100644 --- a/roadmap/implementors-guide/src/node/backing/candidate-backing.md +++ b/roadmap/implementors-guide/src/node/backing/candidate-backing.md @@ -2,17 +2,17 @@ The Candidate Backing subsystem ensures every parablock considered for relay block inclusion has been seconded by at least one validator, and approved by a quorum. Parablocks for which no validator will assert correctness are discarded. If the block later proves invalid, the initial backers are slashable; this gives Polkadot a rational threat model during subsequent stages. -Its role is to produce backable candidates for inclusion in new relay-chain blocks. It does so by issuing signed [`Statement`s](/type-definitions.html#statement-type) and tracking received statements signed by other validators. Once enough statements are received, they can be combined into backing for specific candidates. +Its role is to produce backable candidates for inclusion in new relay-chain blocks. It does so by issuing signed [`Statement`s](../../type-definitions.html#statement-type) and tracking received statements signed by other validators. Once enough statements are received, they can be combined into backing for specific candidates. Note that though the candidate backing subsystem attempts to produce as many backable candidates as possible, it does _not_ attempt to choose a single authoritative one. The choice of which actually gets included is ultimately up to the block author, by whatever metrics it may use; those are opaque to this subsystem. -Once a sufficient quorum has agreed that a candidate is valid, this subsystem notifies the [Provisioner](/node/utility/provisioner.html), which in turn engages block production mechanisms to include the parablock. +Once a sufficient quorum has agreed that a candidate is valid, this subsystem notifies the [Provisioner](../utility/provisioner.html), which in turn engages block production mechanisms to include the parablock. ## Protocol -The [Candidate Selection subsystem](/node/backing/candidate-selection.html) is the primary source of non-overseer messages into this subsystem. That subsystem generates appropriate [`CandidateBackingMessage`s](/type-definitions.html#candidate-backing-message), and passes them to this subsystem. +The [Candidate Selection subsystem](candidate-selection.html) is the primary source of non-overseer messages into this subsystem. That subsystem generates appropriate [`CandidateBackingMessage`s](../../type-definitions.html#candidate-backing-message), and passes them to this subsystem. -This subsystem validates the candidates and generates an appropriate [`Statement`](/type-definitions.html#statement-type). All `Statement`s are then passed on to the [Statement Distribution subsystem](/node/backing/statement-distribution.html) to be gossiped to peers. When this subsystem decides that a candidate is invalid, and it was recommended to us to second by our own Candidate Selection subsystem, a message is sent to the Candidate Selection subsystem with the candidate's hash so that the collator which recommended it can be penalized. +This subsystem validates the candidates and generates an appropriate [`Statement`](../../type-definitions.html#statement-type). All `Statement`s are then passed on to the [Statement Distribution subsystem](statement-distribution.html) to be gossiped to peers. 
When this subsystem decides that a candidate is invalid, and that candidate was recommended for seconding by our own Candidate Selection subsystem, a message is sent to the Candidate Selection subsystem with the candidate's hash so that the collator which recommended it can be penalized. ## Functionality @@ -20,8 +20,8 @@ The subsystem should maintain a set of handles to Candidate Backing Jobs that ar ### On Overseer Signal -* If the signal is an [`OverseerSignal`](/type-definitions.html#overseer-signal)`::StartWork(relay_parent)`, spawn a Candidate Backing Job with the given relay parent, storing a bidirectional channel with the Candidate Backing Job in the set of handles. -* If the signal is an [`OverseerSignal`](/type-definitions.html#overseer-signal)`::StopWork(relay_parent)`, cease the Candidate Backing Job under that relay parent, if any. +* If the signal is an [`OverseerSignal`](../../type-definitions.html#overseer-signal)`::StartWork(relay_parent)`, spawn a Candidate Backing Job with the given relay parent, storing a bidirectional channel with the Candidate Backing Job in the set of handles. +* If the signal is an [`OverseerSignal`](../../type-definitions.html#overseer-signal)`::StopWork(relay_parent)`, cease the Candidate Backing Job under that relay parent, if any. ### On `CandidateBackingMessage` @@ -39,7 +39,7 @@ The subsystem should maintain a set of handles to Candidate Backing Jobs that ar The Candidate Backing Job represents the work a node does for backing candidates with respect to a particular relay-parent. -The goal of a Candidate Backing Job is to produce as many backable candidates as possible. This is done via signed [`Statement`s](/type-definitions.html#statement-type) by validators. If a candidate receives a majority of supporting Statements from the Parachain Validators currently assigned, then that candidate is considered backable. +The goal of a Candidate Backing Job is to produce as many backable candidates as possible. This is done via signed [`Statement`s](../../type-definitions.html#statement-type) by validators. If a candidate receives a majority of supporting Statements from the Parachain Validators currently assigned, then that candidate is considered backable. ### On Startup diff --git a/roadmap/implementors-guide/src/node/backing/candidate-selection.md b/roadmap/implementors-guide/src/node/backing/candidate-selection.md index 0b9754145d2d..a5f56c21df3e 100644 --- a/roadmap/implementors-guide/src/node/backing/candidate-selection.md +++ b/roadmap/implementors-guide/src/node/backing/candidate-selection.md @@ -6,9 +6,9 @@ This subsystem includes networking code for communicating with collators, and tr This subsystem is only ever interested in parablocks assigned to the particular parachain which this validator is currently handling. -New parablock candidates may arrive from a potentially unbounded set of collators. This subsystem chooses either 0 or 1 of them per relay parent to second. If it chooses to second a candidate, it sends an appropriate message to the [Candidate Backing subsystem](/node/backing/candidate-backing.html) to generate an appropriate [`Statement`](/type-definitions.html#statement-type). +New parablock candidates may arrive from a potentially unbounded set of collators. This subsystem chooses either 0 or 1 of them per relay parent to second. 
If it chooses to second a candidate, it sends a message to the [Candidate Backing subsystem](candidate-backing.html) to generate the appropriate [`Statement`](../../type-definitions.html#statement-type). -In the event that a parablock candidate proves invalid, this subsystem will receive a message back from the Candidate Backing subsystem indicating so. If that parablock candidate originated from a collator, this subsystem will blacklist that collator. If that parablock candidate originated from a peer, this subsystem generates a report for the [Misbehavior Arbitration subsystem](/node/utility/misbehavior-arbitration.html). +In the event that a parablock candidate proves invalid, this subsystem will receive a message back from the Candidate Backing subsystem indicating so. If that parablock candidate originated from a collator, this subsystem will blacklist that collator. If that parablock candidate originated from a peer, this subsystem generates a report for the [Misbehavior Arbitration subsystem](../utility/misbehavior-arbitration.html). ## Protocol @@ -17,7 +17,7 @@ Input: None Output: - Validation requests to Validation subsystem -- [`CandidateBackingMessage`](/type-definitions.html#candidate-backing-message)`::Second` +- [`CandidateBackingMessage`](../../type-definitions.html#candidate-backing-message)`::Second` - Peer set manager: report peers (collators who have misbehaved) ## Functionality diff --git a/roadmap/implementors-guide/src/node/backing/pov-distribution.md b/roadmap/implementors-guide/src/node/backing/pov-distribution.md index 47d04c3afa90..60493d481272 100644 --- a/roadmap/implementors-guide/src/node/backing/pov-distribution.md +++ b/roadmap/implementors-guide/src/node/backing/pov-distribution.md @@ -1,6 +1,6 @@ # PoV Distribution -This subsystem is responsible for distributing PoV blocks. For now, unified with [Statement Distribution subsystem](/node/backing/statement-distribution.html). +This subsystem is responsible for distributing PoV blocks. For now, it is unified with the [Statement Distribution subsystem](statement-distribution.html). ## Protocol diff --git a/roadmap/implementors-guide/src/node/backing/statement-distribution.md b/roadmap/implementors-guide/src/node/backing/statement-distribution.md index fe5ef9c34178..7c0132a2f0ca 100644 --- a/roadmap/implementors-guide/src/node/backing/statement-distribution.md +++ b/roadmap/implementors-guide/src/node/backing/statement-distribution.md @@ -22,7 +22,7 @@ Implemented as a gossip protocol. Register a network event producer on startup. Statement Distribution is the only backing subsystem which has any notion of peer nodes, which may be any full nodes on the network. Validators will also act as peer nodes. -It is responsible for signing statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](/node/utility/misbehavior-arbitration.html). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes, who distribute statements by validators. On receiving a signed statement from a peer, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](/node/backing/candidate-backing.html) to handle the validator's statement. 
+It is responsible for signing statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.html). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes, who distribute statements by validators. On receiving a signed statement from a peer, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.html) to handle the validator's statement. Track equivocating validators and stop accepting information from them. Forward double-vote proofs to the double-vote reporting system. Establish a data-dependency order: @@ -35,7 +35,7 @@ The Statement Distribution subsystem sends statements to peer nodes and detects ## Peer Receipt State Machine -There is a very simple state machine which governs which messages we are willing to receive from peers. Not depicted in the state machine: on initial receipt of any [`SignedStatement`](/type-definitions.html#signed-statement-type), validate that the provided signature does in fact sign the included data. Note that each individual parablock candidate gets its own instance of this state machine; it is perfectly legal to receive a `Valid(X)` before a `Seconded(Y)`, as long as a `Seconded(X)` has been received. +There is a very simple state machine which governs which messages we are willing to receive from peers. Not depicted in the state machine: on initial receipt of any [`SignedStatement`](../../type-definitions.html#signed-statement-type), validate that the provided signature does in fact sign the included data. Note that each individual parablock candidate gets its own instance of this state machine; it is perfectly legal to receive a `Valid(X)` before a `Seconded(Y)`, as long as a `Seconded(X)` has been received. A: Initial State. Receive `SignedStatement(Statement::Second)`: extract `Statement`, forward to Candidate Backing, proceed to B. Receive any other `SignedStatement` variant: drop it. B: Receive any `SignedStatement`: extract `Statement`, forward to Candidate Backing. Receive `OverseerMessage::StopWork`: proceed to C. diff --git a/roadmap/implementors-guide/src/node/overseer.md b/roadmap/implementors-guide/src/node/overseer.md index 3f8d37d46d04..e8bc06913417 100644 --- a/roadmap/implementors-guide/src/node/overseer.md +++ b/roadmap/implementors-guide/src/node/overseer.md @@ -24,7 +24,7 @@ The hierarchy of subsystems: ``` -The overseer determines work to do based on block import events and block finalization events. It does this by keeping track of the set of relay-parents for which work is currently being done. This is known as the "active leaves" set. It determines an initial set of active leaves on startup based on the data on-disk, and uses events about blockchain import to update the active leaves. Updates lead to [`OverseerSignal`](/type-definitions.html#overseer-signal)`::StartWork` and [`OverseerSignal`](/type-definitions.html#overseer-signal)`::StopWork` being sent according to new relay-parents, as well as relay-parents to stop considering. Block import events inform the overseer of leaves that no longer need to be built on, now that they have children, and inform us to begin building on those children. Block finalization events inform us when we can stop focusing on blocks that appear to have been orphaned. 
+The overseer determines work to do based on block import events and block finalization events. It does this by keeping track of the set of relay-parents for which work is currently being done. This is known as the "active leaves" set. It determines an initial set of active leaves on startup based on the data on-disk, and uses events about blockchain import to update the active leaves. Updates lead to [`OverseerSignal`](../type-definitions.html#overseer-signal)`::StartWork` and [`OverseerSignal`](../type-definitions.html#overseer-signal)`::StopWork` being sent according to new relay-parents, as well as relay-parents to stop considering. Block import events inform the overseer of leaves that no longer need to be built on, now that they have children, and inform us to begin building on those children. Block finalization events inform us when we can stop focusing on blocks that appear to have been orphaned. The overseer's logic can be described with these functions: diff --git a/roadmap/implementors-guide/src/node/subsystems-and-jobs.md b/roadmap/implementors-guide/src/node/subsystems-and-jobs.md index 9f622ab54ec0..9cbced3f41bc 100644 --- a/roadmap/implementors-guide/src/node/subsystems-and-jobs.md +++ b/roadmap/implementors-guide/src/node/subsystems-and-jobs.md @@ -2,7 +2,7 @@ In this section we define the notions of Subsystems and Jobs. These are guidelines for how we will employ an architecture of hierarchical state machines. We'll have a top-level state machine which oversees the next level of state machines which oversee another layer of state machines and so on. The next sections will lay out these guidelines for what we've called subsystems and jobs, since this model applies to many of the tasks that the Node-side behavior needs to encompass, but these are only guidelines and some Subsystems may have deeper hierarchies internally. -Subsystems are long-lived worker tasks that are in charge of performing some particular kind of work. All subsystems can communicate with each other via a well-defined protocol. Subsystems can't generally communicate directly, but must coordinate communication through an [Overseer](/node/overseer.html), which is responsible for relaying messages, handling subsystem failures, and dispatching work signals. +Subsystems are long-lived worker tasks that are in charge of performing some particular kind of work. All subsystems can communicate with each other via a well-defined protocol. Subsystems can't generally communicate directly, but must coordinate communication through an [Overseer](overseer.html), which is responsible for relaying messages, handling subsystem failures, and dispatching work signals. Most work that happens on the Node-side is related to building on top of a specific relay-chain block, which is contextually known as the "relay parent". We call it the relay parent to explicitly denote that it is a block in the relay chain and not on a parachain. We refer to the parent because when we are in the process of building a new block, we don't know what that new block is going to be. The parent block is our only stable point of reference, even though it is usually only useful when it is not yet a parent but in fact a leaf of the block-DAG expected to soon become a parent (because validators are authoring on top of it). Furthermore, we are assuming a forkful blockchain-extension protocol, which means that there may be multiple possible children of the relay-parent. 
Even if the relay parent has multiple child blocks, the parent of those children is the same, and the context in which those children are authored should be the same. The parent block is the best and most stable reference to use for defining the scope of work items and messages, and is typically referred to by its cryptographic hash. diff --git a/roadmap/implementors-guide/src/node/utility/candidate-validation.md b/roadmap/implementors-guide/src/node/utility/candidate-validation.md index f3e6f6581c78..30c110fdef1e 100644 --- a/roadmap/implementors-guide/src/node/utility/candidate-validation.md +++ b/roadmap/implementors-guide/src/node/utility/candidate-validation.md @@ -6,7 +6,7 @@ This subsystem is responsible for handling candidate validation requests. It is Input: -- [`CandidateValidationMessage`](/type-definitions.html#validation-request-type) +- [`CandidateValidationMessage`](../../type-definitions.html#validation-request-type) ## Functionality diff --git a/roadmap/implementors-guide/src/node/utility/provisioner.md b/roadmap/implementors-guide/src/node/utility/provisioner.md index 460ee9c33d34..a754a672a6aa 100644 --- a/roadmap/implementors-guide/src/node/utility/provisioner.md +++ b/roadmap/implementors-guide/src/node/utility/provisioner.md @@ -10,11 +10,11 @@ There are several distinct types of provisionable data, but they share this prop ### Backed Candidates -The block author can choose 0 or 1 backed parachain candidates per parachain; the only constraint is that each backed candidate has the appropriate relay parent. However, the choice of a backed candidate must be the block author's; the provisioner must ensure that block authors are aware of all available [`BackedCandidate`s](/type-definitions.html#backed-candidate). +The block author can choose 0 or 1 backed parachain candidates per parachain; the only constraint is that each backed candidate has the appropriate relay parent. However, the choice of a backed candidate must be the block author's; the provisioner must ensure that block authors are aware of all available [`BackedCandidate`s](../../type-definitions.html#backed-candidate). ### Signed Bitfields -[Signed bitfields](/type-definitions.html#signed-availability-bitfield) are attestations from a particular validator about which candidates it believes are available. +[Signed bitfields](../../type-definitions.html#signed-availability-bitfield) are attestations from a particular validator about which candidates it believes are available. ### Misbehavior Reports @@ -26,13 +26,13 @@ Note that there is no mechanism in place which forces a block author to include The dispute inherent is similar to a misbehavior report in that it is an attestation of misbehavior on the part of a validator or group of validators. Unlike a misbehavior report, it is not self-contained: resolution requires coordinated action by several validators. The canonical example of a dispute inherent involves an approval checker discovering that a set of validators has improperly approved an invalid parachain block: resolving this requires the entire validator set to re-validate the block, so that the minority can be slashed. -Dispute resolution is complex and is explained in substantially more detail [here](/runtime/validity.html). +Dispute resolution is complex and is explained in substantially more detail [here](../../runtime/validity.html). > TODO: The provisioner is responsible for selecting remote disputes to replay. Let's figure out the details. 
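To make the shapes above concrete, here is a rough Rust sketch of the provisionable-data kinds and of the channel flow described under the Protocol section below. Every name here is an illustrative stand-in, not the subsystem's actual API:

```rust
use std::sync::mpsc::Sender;

// Placeholder payloads; the real types live in the type-definitions chapter.
#[derive(Clone)] struct BackedCandidate;
#[derive(Clone)] struct SignedAvailabilityBitfield;
#[derive(Clone)] struct MisbehaviorReport;
#[derive(Clone)] struct DisputeInherent;

/// The four kinds of provisionable data discussed above.
#[derive(Clone)]
enum ProvisionableData {
    BackedCandidate(BackedCandidate),
    Bitfield(SignedAvailabilityBitfield),
    MisbehaviorReport(MisbehaviorReport),
    Dispute(DisputeInherent),
}

struct Provisioner {
    /// One sender per block author that requested block authorship data.
    subscribers: Vec<Sender<ProvisionableData>>,
}

impl Provisioner {
    /// A block author registers a channel for provisionable data.
    fn on_request_block_authorship_data(&mut self, tx: Sender<ProvisionableData>) {
        self.subscribers.push(tx);
    }

    /// Forward each piece of provisionable data to every live subscriber,
    /// dropping channels whose receiving end has gone away.
    fn on_provisionable_data(&mut self, data: ProvisionableData) {
        self.subscribers.retain(|tx| tx.send(data.clone()).is_ok());
    }
}
```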
## Protocol -Input: [`ProvisionerMessage`](/type-definitions.html#provisioner-message). Backed candidates come from the [Candidate Backing subsystem](/node/backing/candidate-backing.html), signed bitfields come from the [Bitfield Distribution subsystem](/node/availability/bitfield-distribution.html), and misbehavior reports and disputes come from the [Misbehavior Arbitration subsystem](/node/utility/misbehavior-arbitration.html). +Input: [`ProvisionerMessage`](../../type-definitions.html#provisioner-message). Backed candidates come from the [Candidate Backing subsystem](../backing/candidate-backing.html), signed bitfields come from the [Bitfield Distribution subsystem](../availability/bitfield-distribution.html), and misbehavior reports and disputes come from the [Misbehavior Arbitration subsystem](misbehavior-arbitration.html). At initialization, this subsystem has no outputs. Block authors can send a `ProvisionerMessage::RequestBlockAuthorshipData`, which includes a channel over which provisionable data can be sent. All appropriate provisionable data will then be sent over this channel, as it is received. diff --git a/roadmap/implementors-guide/src/node/validity/README.md b/roadmap/implementors-guide/src/node/validity/README.md index 11ef8108da74..ab39cf63510f 100644 --- a/roadmap/implementors-guide/src/node/validity/README.md +++ b/roadmap/implementors-guide/src/node/validity/README.md @@ -1,3 +1,3 @@ # Validity -The node validity subsystems exist to support the runtime [Validity module](/runtime/validity.html). Their behavior and specifications are as-yet undefined. +The node validity subsystems exist to support the runtime [Validity module](../../runtime/validity.html). Their behavior and specifications are as-yet undefined. diff --git a/roadmap/implementors-guide/src/parachains-overview.md b/roadmap/implementors-guide/src/parachains-overview.md index 110ebc5ee948..561811ee7fc9 100644 --- a/roadmap/implementors-guide/src/parachains-overview.md +++ b/roadmap/implementors-guide/src/parachains-overview.md @@ -18,11 +18,11 @@ Here is a description of the Inclusion Pipeline: the path a parachain block (or 1. Validators are selected and assigned to parachains by the Validator Assignment routine. 1. A collator produces the parachain block, which is known as a parachain candidate or candidate, along with a PoV for the candidate. -1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collation Distribution subsystem](/node/collators/collation-distribution.html). -1. The validators assigned to a parachain at a given point in time participate in the [Candidate Backing subsystem](/node/backing/candidate-backing.html) to validate candidates that were put forward for validation. Candidates which gather enough signed validity statements from validators are considered "backable". Their backing is the set of signed validity statements. +1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collation Distribution subsystem](node/collators/collation-distribution.html). +1. The validators assigned to a parachain at a given point in time participate in the [Candidate Backing subsystem](node/backing/candidate-backing.html) to validate candidates that were put forward for validation. Candidates which gather enough signed validity statements from validators are considered "backable". Their backing is the set of signed validity statements. 1. 
A relay-chain block author, selected by BABE, can note up to one (1) backable candidate for each parachain to include in the relay-chain block alongside its backing. A backable candidate, once included in the relay-chain, is considered backed in that fork of the relay-chain. 1. Once backed in the relay-chain, the parachain candidate is considered to be "pending availability". It is not considered to be included as part of the parachain until it is proven available. -1. In the following relay-chain blocks, validators will participate in the [Availability Distribution subsystem](/node/availability/availability-distribution.html) to ensure availability of the candidate. Information regarding the availability of the candidate will be noted in the subsequent relay-chain blocks. +1. In the following relay-chain blocks, validators will participate in the [Availability Distribution subsystem](node/availability/availability-distribution.html) to ensure availability of the candidate. Information regarding the availability of the candidate will be noted in the subsequent relay-chain blocks. 1. Once the relay-chain state machine has enough information to consider the candidate's PoV as being available, the candidate is considered to be part of the parachain and is graduated to being a full parachain block, or parablock for short. Note that the candidate can fail to be included in any of the following ways: diff --git a/roadmap/implementors-guide/src/runtime/README.md b/roadmap/implementors-guide/src/runtime/README.md index 2b25b1cf2035..5367146d7db2 100644 --- a/roadmap/implementors-guide/src/runtime/README.md +++ b/roadmap/implementors-guide/src/runtime/README.md @@ -21,7 +21,7 @@ We will split the logic of the runtime up into these modules: * Inclusion: handles the inclusion and availability of scheduled parachains and parathreads. * Validity: handles secondary checks and dispute resolution for included, available parablocks. -The [Initializer module](/runtime/initializer.html) is special - it's responsible for handling the initialization logic of the other modules to ensure that the correct initialization order and related invariants are maintained. The other modules won't specify a on-initialize logic, but will instead expose a special semi-private routine that the initialization module will call. The other modules are relatively straightforward and perform the roles described above. +The [Initializer module](initializer.html) is special - it's responsible for handling the initialization logic of the other modules to ensure that the correct initialization order and related invariants are maintained. The other modules won't specify an on-initialize logic, but will instead expose a special semi-private routine that the initialization module will call. The other modules are relatively straightforward and perform the roles described above. The Parachain Host operates under a changing set of validators. Time is split up into periodic sessions, where each session brings a potentially new set of validators. Sessions are buffered by one, meaning that the validators of the upcoming session are fixed and always known. Parachain Host runtime modules need to react to changes in the validator set, as such changes affect the runtime logic for processing candidate backing, availability bitfields, and misbehavior reports. 
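As a rough illustration of the reaction surface this implies, each module might expose a semi-private session-change hook for the Initializer to call in a fixed order, alongside its per-block initialization. This sketch uses assumed names; the data a real session-change notification should carry is spelled out later in this chapter:

```rust
// Illustrative only: hook and type names are assumptions.
type ValidatorId = [u8; 32];

struct SessionChangeNotification {
    /// Validators of the new session.
    validators: Vec<ValidatorId>,
    /// Validators of the upcoming session, already known because
    /// sessions are buffered by one.
    queued: Vec<ValidatorId>,
}

trait ParachainsModule {
    /// Per-block initialization, invoked by the Initializer in a fixed order.
    fn initialize(&mut self);
    /// Semi-private session-change hook: only the Initializer calls this,
    /// and always before `initialize` in a session-change block.
    fn on_session_change(&mut self, notification: &SessionChangeNotification);
}
```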
The Parachain Host modules can't determine ahead-of-time exactly when session change notifications are going to happen within the block (note: this depends on module initialization order again - better to put session before parachains modules). Ideally, session changes are always handled before initialization. It is clearly a problem if we compute validator assignments to parachains during initialization and then the set of validators changes. In the best case, we can recognize that re-initialization needs to be done. In the worst case, bugs would occur. @@ -33,7 +33,7 @@ There are 3 main ways that we can handle this issue: Although option 3 is the most comprehensive, it runs counter to our goal of simplicity. Option 1 means requiring the runtime to do redundant work at all sessions and will also mean, like option 3, designing things in such a way that initialization can be rolled back and reapplied under the new environment. That leaves option 2, although it is a "nuclear" option in a way and requires us to constrain the parachain host to only run in full runtimes with a certain order of operations. -So the other role of the initializer module is to forward session change notifications to modules in the initialization order, throwing an unrecoverable error if the notification is received after initialization. Session change is the point at which the [Configuration Module](/runtime/configuration.html) updates the configuration. Most of the other modules will handle changes in the configuration during their session change operation, so the initializer should provide both the old and new configuration to all the other +So the other role of the initializer module is to forward session change notifications to modules in the initialization order, throwing an unrecoverable error if the notification is received after initialization. Session change is the point at which the [Configuration Module](configuration.html) updates the configuration. Most of the other modules will handle changes in the configuration during their session change operation, so the initializer should provide both the old and new configuration to all the other modules alongside the session change notification. This means that a session change notification should consist of the following data: ```rust diff --git a/roadmap/implementors-guide/src/runtime/configuration.md b/roadmap/implementors-guide/src/runtime/configuration.md index aaa0b7096f4d..d88769735fe6 100644 --- a/roadmap/implementors-guide/src/runtime/configuration.md +++ b/roadmap/implementors-guide/src/runtime/configuration.md @@ -1,8 +1,8 @@ # Configuration Module -This module is responsible for managing all configuration of the parachain host in-flight. It provides a central point for configuration updates to prevent races between configuration changes and parachain-processing logic. Configuration can only change during the session change routine, and as this module handles the session change notification first it provides an invariant that the configuration does not change throughout the entire session. Both the [scheduler](/runtime/scheduler.html) and [inclusion](/runtime/inclusion.html) modules rely on this invariant to ensure proper behavior of the scheduler. +This module is responsible for managing all configuration of the parachain host in-flight. It provides a central point for configuration updates to prevent races between configuration changes and parachain-processing logic. 
Configuration can only change during the session change routine, and as this module handles the session change notification first, it provides an invariant that the configuration does not change throughout the entire session. Both the [scheduler](scheduler.html) and [inclusion](inclusion.html) modules rely on this invariant to ensure proper behavior of the scheduler. -The configuration that we will be tracking is the [`HostConfiguration`](/type-definitions.html#host-configuration) struct. +The configuration that we will be tracking is the [`HostConfiguration`](../type-definitions.html#host-configuration) struct. ## Storage diff --git a/roadmap/implementors-guide/src/runtime/inclusioninherent.md b/roadmap/implementors-guide/src/runtime/inclusioninherent.md index bd5ecc375a93..4b10e07e404c 100644 --- a/roadmap/implementors-guide/src/runtime/inclusioninherent.md +++ b/roadmap/implementors-guide/src/runtime/inclusioninherent.md @@ -16,7 +16,7 @@ Included: Option<()>, ## Entry Points -* `inclusion`: This entry-point accepts two parameters: [`Bitfields`](/type-definitions.html#signed-availability-bitfield) and [`BackedCandidates`](/type-definitions.html#backed-candidate). +* `inclusion`: This entry-point accepts two parameters: [`Bitfields`](../type-definitions.html#signed-availability-bitfield) and [`BackedCandidates`](../type-definitions.html#backed-candidate). 1. The `Bitfields` are first forwarded to the `process_bitfields` routine, returning a set of freed cores. Provide `Scheduler::core_para` as a core-lookup to the `process_bitfields` routine. Annotate each of these freed cores with `FreedReason::Concluded`. 1. If `Scheduler::availability_timeout_predicate` is `Some`, invoke `Inclusion::collect_pending` using it, and add timed-out cores to the free cores, annotated with `FreedReason::TimedOut`. 1. Invoke `Scheduler::schedule(freed)` diff --git a/roadmap/implementors-guide/src/runtime/initializer.md b/roadmap/implementors-guide/src/runtime/initializer.md index 4456cf99d1b2..1d343d7a4087 100644 --- a/roadmap/implementors-guide/src/runtime/initializer.md +++ b/roadmap/implementors-guide/src/runtime/initializer.md @@ -18,7 +18,7 @@ The other modules are initialized in this order: 1. Inclusion 1. Validity. -The [Configuration Module](/runtime/configuration.html) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module. +The [Configuration Module](configuration.html) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module. Set `HasInitialized` to true. diff --git a/roadmap/implementors-guide/src/runtime/scheduler.md b/roadmap/implementors-guide/src/runtime/scheduler.md index b365308743b1..7123aed8fb74 100644 --- a/roadmap/implementors-guide/src/runtime/scheduler.md +++ b/roadmap/implementors-guide/src/runtime/scheduler.md @@ -60,11 +60,11 @@ Availability Core Transitions within Block | Availability Timeout ``` -Validator group assignments do not need to change very quickly. The security benefits of fast rotation is redundant with the challenge mechanism in the [Validity module](/runtime/validity.html). Because of this, we only divide validators into groups at the beginning of the session and do not shuffle membership during the session. 
However, we do take steps to ensure that no particular validator group has dominance over a single parachain or parathread-multiplexer for an entire session to provide better guarantees of liveness. +Validator group assignments do not need to change very quickly. The security benefits of fast rotation are redundant with the challenge mechanism in the [Validity module](validity.html). Because of this, we only divide validators into groups at the beginning of the session and do not shuffle membership during the session. However, we do take steps to ensure that no particular validator group has dominance over a single parachain or parathread-multiplexer for an entire session to provide better guarantees of liveness. Validator groups rotate across availability cores in a round-robin fashion, with rotation occurring at fixed intervals. The i'th group will be assigned to the `(i+k)%n`'th core at any point in time, where `k` is the number of rotations that have occurred in the session, and `n` is the number of cores. This makes upcoming rotations within the same session predictable. -When a rotation occurs, validator groups are still responsible for distributing availability chunks for any previous cores that are still occupied and pending availability. In practice, rotation and availability-timeout frequencies should be set so this will only be the core they have just been rotated from. It is possible that a validator group is rotated onto a core which is currently occupied. In this case, the validator group will have nothing to do until the previously-assigned group finishes their availability work and frees the core or the availability process times out. Depending on if the core is for a parachain or parathread, a different timeout `t` from the [`HostConfiguration`](/type-definitions.html#host-configuration) will apply. Availability timeouts should only be triggered in the first `t-1` blocks after the beginning of a rotation. +When a rotation occurs, validator groups are still responsible for distributing availability chunks for any previous cores that are still occupied and pending availability. In practice, rotation and availability-timeout frequencies should be set so this will only be the core they have just been rotated from. It is possible that a validator group is rotated onto a core which is currently occupied. In this case, the validator group will have nothing to do until the previously-assigned group finishes their availability work and frees the core or the availability process times out. Depending on whether the core is for a parachain or parathread, a different timeout `t` from the [`HostConfiguration`](../type-definitions.html#host-configuration) will apply. Availability timeouts should only be triggered in the first `t-1` blocks after the beginning of a rotation. Parathreads operate on a system of claims. Collators participate in auctions to stake a claim on authoring the next block of a parathread, although the auction mechanism is beyond the scope of the scheduler. The scheduler guarantees that they'll be given at least a certain number of attempts to author a candidate that is backed. Attempts that fail during the availability phase are not counted, since ensuring availability at that stage is the responsibility of the backing validators, not of the collator. When a claim is accepted, it is placed into a queue of claims, and each claim is assigned to a particular parathread-multiplexing core in advance. 
Given that the current assignments of validator groups to cores are known, and the upcoming assignments are predictable, it is possible for parathread collators to know whom they should be talking to now and with whom they should begin establishing connections as a fallback. @@ -147,13 +147,13 @@ Scheduled: Vec, // sorted ascending by CoreIndex. ## Session Change -Session changes are the only time that configuration can change, and the [Configuration module](/runtime/configuration.html)'s session-change logic is handled before this module's. We also lean on the behavior of the [Inclusion module](/runtime/inclusion.html) which clears all its occupied cores on session change. Thus we don't have to worry about cores being occupied across session boundaries and it is safe to re-size the `AvailabilityCores` bitfield. +Session changes are the only time that configuration can change, and the [Configuration module](configuration.html)'s session-change logic is handled before this module's. We also lean on the behavior of the [Inclusion module](inclusion.html) which clears all its occupied cores on session change. Thus we don't have to worry about cores being occupied across session boundaries and it is safe to re-size the `AvailabilityCores` bitfield. Actions: 1. Set `SessionStartBlock` to current block number. 1. Clear all `Some` members of `AvailabilityCores`. Return all parathread claims to queue with retries un-incremented. -1. Set `configuration = Configuration::configuration()` (see [`HostConfiguration`](/type-definitions.html#host-configuration)) +1. Set `configuration = Configuration::configuration()` (see [`HostConfiguration`](../type-definitions.html#host-configuration)) 1. Resize `AvailabilityCores` to have length `Paras::parachains().len() + configuration.parathread_cores` with all `None` entries. 1. Compute new validator groups by shuffling using a secure randomness beacon - We need a total of `N = Paras::parachains().len() + configuration.parathread_cores` validator groups. diff --git a/roadmap/implementors-guide/src/runtime/validity.md b/roadmap/implementors-guide/src/runtime/validity.md index 0418252f16c1..cd0c2163323a 100644 --- a/roadmap/implementors-guide/src/runtime/validity.md +++ b/roadmap/implementors-guide/src/runtime/validity.md @@ -53,7 +53,7 @@ The second type of remote dispute is the unconcluded dispute. An unconcluded rem When beginning a remote dispute, at least one escalation by a validator is required, but this validator may be malicious and desire to be slashed. There is no guarantee that the para is registered on this fork of the relay chain or that the para was considered available on any fork of the relay chain. -So the first step is to have the remote dispute proceed through an availability process similar to the one in the [Inclusion Module](/runtime/inclusion.html), but without worrying about core assignments or compactness in bitfields. +So the first step is to have the remote dispute proceed through an availability process similar to the one in the [Inclusion Module](inclusion.html), but without worrying about core assignments or compactness in bitfields. We assume that remote disputes are with respect to the same validator set as on the current fork, as BABE and GRANDPA ensure that forks are never long enough to diverge in validator set. > TODO: this is at least directionally correct. Handling disputes on other validator sets seems useless anyway as they wouldn't be bonded. 
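As a rough illustration of that trimmed-down availability process, the sketch below tallies which validators report holding their erasure chunk for the disputed candidate and applies the usual 2/3+ threshold. Types and names are assumptions, not the module's actual storage:

```rust
use std::collections::HashSet;

/// Per-candidate tally for a remote dispute: no core assignments and no
/// compact bitfields, just the set of validators who reported their chunk.
struct RemoteDisputeAvailability {
    chunk_holders: HashSet<u32>, // validator indices
    n_validators: usize,
}

impl RemoteDisputeAvailability {
    fn note_chunk_holder(&mut self, validator_index: u32) {
        self.chunk_holders.insert(validator_index);
    }

    /// Mirrors the 2/3+ availability threshold: more than two thirds of all
    /// validators must report holding their chunk.
    fn is_available(&self) -> bool {
        self.chunk_holders.len() * 3 > self.n_validators * 2
    }
}
```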
diff --git a/roadmap/implementors-guide/src/type-definitions.md b/roadmap/implementors-guide/src/type-definitions.md index 6ab70d123a32..d8c90bba50d0 100644 --- a/roadmap/implementors-guide/src/type-definitions.md +++ b/roadmap/implementors-guide/src/type-definitions.md @@ -57,7 +57,7 @@ Either way, there will be some top-level type encapsulating messages from the ov ## Candidate Selection Message -These messages are sent from the overseer to the [Candidate Selection subsystem](/node/backing/candidate-selection.html) when new parablocks are available for validation. +These messages are sent from the overseer to the [Candidate Selection subsystem](node/backing/candidate-selection.html) when new parablocks are available for validation. ```rust enum CandidateSelectionMessage { @@ -88,7 +88,7 @@ enum CandidateBackingMessage { ## Validation Request Type -Various modules request that the [Candidate Validation subsystem](/node/utility/candidate-validation.html) validate a block with this message +Various modules request that the [Candidate Validation subsystem](node/utility/candidate-validation.html) validate a block with this message: ```rust enum PoVOrigin { @@ -106,7 +106,7 @@ enum CandidateValidationMessage { ## Statement Type -The [Candidate Validation subsystem](/node/utility/candidate-validation.html) issues these messages in reponse to `ValidationRequest`s. The [Candidate Backing subsystem](/node/backing/candidate-backing.html) may upgrade the `Valid` variant to `Seconded`. +The [Candidate Validation subsystem](node/utility/candidate-validation.html) issues these messages in response to `ValidationRequest`s. The [Candidate Backing subsystem](node/backing/candidate-backing.html) may upgrade the `Valid` variant to `Seconded`. ```rust /// A statement about the validity of a parachain candidate.