This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Implementer's Guide: Flesh out more details for upward messages #1556

Merged
14 commits merged into from
Aug 14, 2020
21 changes: 10 additions & 11 deletions roadmap/implementers-guide/src/messaging.md
Original file line number Diff line number Diff line change
Expand Up @@ -17,9 +17,7 @@ digraph {
}
```

Downward Message Passing (DMP) is a mechanism for delivering messages to parachains from the relay chain. Downward
messages may originate not from the relay chain, but rather from another parachain via a mechanism
called HRMP (Will be described later).
Downward Message Passing (DMP) is a mechanism for delivering messages to parachains from the relay chain.

Each parachain has its own queue that stores all pending inbound downward messages. A parachain
doesn't have to process all messages at once, however, there are rules as to how the downward message queue
Expand All @@ -28,16 +26,17 @@ The downward message queue doesn't have a cap on its size and it is up to the re
that prevent spamming in place.

Upward Message Passing (UMP) is a mechanism responsible for delivering messages in the opposite direction:
from a parachain up to the relay chain. Upward messages are dispatched to Runtime entrypoints and
typically used for invoking some actions on the relay chain on behalf of the parachain.
from a parachain up to the relay chain. Upward messages can serve different purposes and can be of different
kinds.

> NOTE: It is conceivable that upward messages will be divided to more fine-grained kinds with a dispatchable upward message
being only one kind of multiple. That would make upward messages inspectable and therefore would allow imposing additional
validity criteria for the candidates that contain these messages.

> Review comment (Contributor): Since this is the only kind right now, phrasing as "One kind, and the only current kind, of message..."

One kind of message is `Dispatchable`. They can be thought of as similar to extrinsics sent to a relay chain: they also
invoke exposed runtime entrypoints, they consume weight and require fees. The difference is that they originate from
a parachain. Each parachain has a queue of dispatchables to be executed. Only a limited number of dispatchables can be queued at a time.
The weight that processing the dispatchables can consume is limited by a preconfigured value. Therefore, it is possible
that some dispatchables will be left for later blocks. To make the dispatching fairer, the queues are processed turn-by-turn
in a round-robin fashion.

Semantically, there is a queue of upward messages where messages from each parachain are stored. Each parachain
can have a limited number of messages and the total sum of pending messages is also limited. Each parachain can dispatch
multiple upward messages per candidate.
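
The round-robin, weight-limited dispatch described above can be sketched as follows. This is a minimal illustration with hypothetical types (`UpwardMessage` here is just a weight; the real runtime stores encoded dispatchables in persistent storage):

```rust
use std::collections::VecDeque;

// Each queued message carries the weight its dispatch would consume.
struct UpwardMessage {
    weight: u64,
}

// Drain the per-para queues turn-by-turn until the cumulative weight budget
// is exhausted. Returns the number of messages dispatched; anything left
// over waits for a later block.
fn process_round_robin(queues: &mut Vec<VecDeque<UpwardMessage>>, max_weight: u64) -> usize {
    let mut consumed = 0u64;
    let mut dispatched = 0usize;
    let mut progressed = true;
    while progressed {
        progressed = false;
        for q in queues.iter_mut() {
            if let Some(msg) = q.front() {
                if consumed + msg.weight > max_weight {
                    return dispatched; // budget hit: leave the rest for later blocks
                }
                consumed += msg.weight;
                q.pop_front(); // "execute" the dispatchable
                dispatched += 1;
                progressed = true;
            }
        }
    }
    dispatched
}
```

Because each para gives up its turn after a single message, a para with a long queue cannot starve the others within a block.
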
Other kinds of upward messages can be introduced in the future as well. Most likely candidates are the HRMP channel management messages.

## Horizontal Message Passing

Expand Down
4 changes: 2 additions & 2 deletions roadmap/implementers-guide/src/runtime/inclusion.md
Original file line number Diff line number Diff line change
Expand Up @@ -68,7 +68,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. Check the collator's signature on the candidate data.
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. check that the upward messages, when combined with the existing queue size, are not exceeding `config.max_upward_queue_count` and `config.watermark_upward_queue_size` parameters.
1. call `Router::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Router::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Router::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. check that in the commitments of each candidate the horizontal messages are sorted by ascending recipient ParaId and no two horizontal messages have the same recipient.
Expand All @@ -79,7 +79,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
> TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
1. call `Router::queue_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment,
1. call `Router::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Router::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -22,4 +22,5 @@ Included: Option<()>,
1. Invoke `Scheduler::schedule(freed)`
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
1. Call the `Router::process_upward_dispatchables` routine to execute all messages in upward dispatch queues.
1. If all of the above succeeds, set `Included` to `Some(())`.
42 changes: 34 additions & 8 deletions roadmap/implementers-guide/src/runtime/router.md
Original file line number Diff line number Diff line change
Expand Up @@ -10,7 +10,7 @@ Storage layout:
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
/// Messages ready to be dispatched onto the relay chain.
/// Messages ready to be dispatched onto the relay chain. The messages are processed in FIFO order.
/// This is subject to `max_upward_queue_count` and
/// `watermark_queue_size` from `HostConfiguration`.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
Expand All @@ -20,6 +20,9 @@ RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
RelayDispatchQueueSize: map ParaId => (u32, u32);
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
NeedsDispatch: Vec<ParaId>;
/// This is the para that will get dispatched first during the next upward dispatchable queue
/// execution round.
NextDispatchRoundStartWith: Option<ParaId>;
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<DownwardMessage>;
```
Expand Down Expand Up @@ -158,6 +161,14 @@ The following routines are intended to be invoked by paras' upward messages.

Candidate Acceptance Function:

* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_msg_num_per_candidate` messages.
1. Checks each upward message individually depending on its kind:
1. If the message kind is `Dispatchable`:
1. Check that `data` actually decodes to a valid entrypoint / dispatchable
1. Check that the entrypoint weight doesn't exceed `config.max_parachain_ump_dispatch_weight`
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the message (NOTE: this must account for all upward
messages of the `Dispatchable` kind already accepted from this candidate up to this point!)
* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
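
The two acceptance checks above can be sketched as follows. The signatures and config fields are simplified stand-ins (not the real runtime types), and the entrypoint-decoding and per-message weight checks are elided:

```rust
struct Config {
    max_upward_msg_num_per_candidate: usize,
    max_upward_queue_count: u32,
    max_upward_queue_size: u32,
}

/// Checks a candidate's `Dispatchable` upward messages against the queue
/// limits, counting the candidate's own earlier messages toward capacity.
fn check_upward_messages(
    config: &Config,
    queue_count: u32,  // current message count in RelayDispatchQueueSize
    queue_size: u32,   // current total byte size in RelayDispatchQueueSize
    msg_sizes: &[u32], // encoded size of each message in the candidate
) -> Result<(), &'static str> {
    if msg_sizes.len() > config.max_upward_msg_num_per_candidate {
        return Err("too many upward messages in the candidate");
    }
    let (mut count, mut size) = (queue_count, queue_size);
    for &s in msg_sizes {
        // Running totals: earlier messages of this candidate use up capacity.
        count += 1;
        size += s;
        if count > config.max_upward_queue_count || size > config.max_upward_queue_size {
            return Err("relay dispatch queue capacity exceeded");
        }
    }
    Ok(())
}

/// Checks that the candidate drains its downward message queue properly.
fn check_processed_downward_messages(
    dmq_len: usize,
    processed_downward_messages: usize,
) -> Result<(), &'static str> {
    if dmq_len < processed_downward_messages {
        return Err("processed more downward messages than the DMQ holds");
    }
    if dmq_len > 0 && processed_downward_messages == 0 {
        return Err("a non-empty DMQ must be drained by at least one message");
    }
    Ok(())
}
```
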
Expand Down Expand Up @@ -191,14 +202,32 @@ Candidate Enactment:
* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.

* `queue_upward_messages(ParaId, Vec<UpwardMessage>)`:
1. Updates `NeedsDispatch`, and enqueues upward messages into `RelayDispatchQueue` and modifies the respective entry in `RelayDispatchQueueSize`.
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process all upward messages in order depending on their kinds:
1. If the message kind is `Dispatchable`:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.
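
A minimal sketch of this bookkeeping, using `HashMap`/`BTreeSet` as illustrative stand-ins for the runtime storage maps:

```rust
use std::collections::{BTreeSet, HashMap};

type ParaId = u32;

// Simplified stand-ins for the Router storage items.
struct Router {
    relay_dispatch_queues: HashMap<ParaId, Vec<Vec<u8>>>,
    relay_dispatch_queue_size: HashMap<ParaId, (u32, u32)>, // (count, total bytes)
    needs_dispatch: BTreeSet<ParaId>,
}

impl Router {
    // Enact the Dispatchable upward messages of one backed candidate.
    fn enact_upward_messages(&mut self, para: ParaId, msgs: Vec<Vec<u8>>) {
        for msg in msgs {
            // Increment both the count and the byte size for `para`.
            let entry = self.relay_dispatch_queue_size.entry(para).or_insert((0, 0));
            entry.0 += 1;
            entry.1 += msg.len() as u32;
            // Append to the para's dispatch queue.
            self.relay_dispatch_queues.entry(para).or_default().push(msg);
            // Idempotent: the set keeps `para` listed at most once.
            self.needs_dispatch.insert(para);
        }
    }
}
```
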

The following routine is intended to be called in the same time when `Paras::schedule_para_cleanup` is called.

`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.

The following routine is meant to execute pending entries in the upward dispatchable queues. This function doesn't fail, even if
any of the dispatchables returns an error.

`process_upward_dispatchables()`:
1. Initialize a cumulative weight counter `T` to 0
1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If it is `None`, start from the beginning. For each `P` encountered:
1. Peek the size of the first dispatchable `D` from `RelayDispatchQueues` for `P`
1. If `weight_of(D) + T > config.max_parachain_ump_dispatch_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
1. Dequeue `D`
1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
1. In case `D` doesn't deserialize into a proper entrypoint, drop it (this can happen if a runtime upgrade took place after the acceptance check)
1. Execute `D` and add the actual amount of weight consumed to `T`.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`. If `NeedsDispatch` became empty then finish processing.
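
The cyclic iteration with `NextDispatchRoundStartWith` can be sketched as follows. For illustration, each message is represented by the weight its dispatch would consume; decoding and actual execution are elided:

```rust
use std::collections::{HashMap, VecDeque};

type ParaId = u32;

// Sketch of the dispatch round. Invariant assumed: every para listed in
// `needs_dispatch` has a non-empty queue in `queues`.
fn process_upward_dispatchables(
    queues: &mut HashMap<ParaId, VecDeque<u64>>,
    needs_dispatch: &mut Vec<ParaId>,
    next_start: &mut Option<ParaId>,
    max_weight: u64,
) {
    let mut total = 0u64; // cumulative weight counter T
    // Resume from the para recorded last round, or from the beginning.
    let mut i = next_start
        .and_then(|p| needs_dispatch.iter().position(|&x| x == p))
        .unwrap_or(0);
    while !needs_dispatch.is_empty() {
        let idx = i % needs_dispatch.len();
        let para = needs_dispatch[idx];
        let queue = queues.get_mut(&para).expect("para in NeedsDispatch has a queue");
        let weight = *queue.front().expect("queues in NeedsDispatch are non-empty");
        if total + weight > max_weight {
            // Out of budget: remember where to pick up next round.
            *next_start = Some(para);
            return;
        }
        queue.pop_front(); // dequeue and "execute" D
        total += weight;
        if queue.is_empty() {
            needs_dispatch.remove(idx);
            i = idx; // the next para now occupies the same index
        } else {
            i = idx + 1;
        }
    }
    *next_start = None;
}
```
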

## Session Change

1. Drain `OutgoingParas`. For each `P` that happened to be in the list:
Expand All @@ -207,8 +236,9 @@ The following routine is intended to be called in the same time when `Paras::sch
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `RelayDispatchQueueSize` of `P`.
1. Remove `RelayDispatchQueues` of `P`.
1. Remove `P` if it exists in `NeedsDispatch`.
1. If `NextDispatchRoundStartWith` is `Some(P)`, reset it to `None`
- Note that we don't remove the open/close requests since they are going to die out naturally.
TODO: What happens with the deposits in channels or open requests?
1. For each request `R` in `HrmpOpenChannelRequests`:
1. if `R.confirmed = false`:
1. increment `R.age` by 1.
Expand All @@ -234,7 +264,3 @@ To remove a channel `C` identified with a tuple `(sender, recipient)`:
1. Remove `C` from `HrmpChannelContents`.
1. Remove `recipient` from the set `HrmpEgressChannelsIndex` for `sender`.
1. Remove `sender` from the set `HrmpIngressChannelsIndex` for `recipient`.

## Finalization

1. Dispatch queued upward messages from `RelayDispatchQueues` in a FIFO order applying the `config.watermark_upward_queue_size` and `config.max_upward_queue_count` limits.
17 changes: 12 additions & 5 deletions roadmap/implementers-guide/src/types/messages.md
Original file line number Diff line number Diff line change
Expand Up @@ -22,11 +22,18 @@ enum ParachainDispatchOrigin {
Root,
}

struct UpwardMessage {
/// The origin for the message to be sent from.
pub origin: ParachainDispatchOrigin,
/// The message data.
pub data: Vec<u8>,
enum UpwardMessage {
/// This upward message is meant to be dispatched to the entrypoint specified.
Dispatchable {
/// The origin for the message to be sent from.
origin: ParachainDispatchOrigin,
/// An opaque byte buffer that encodes an entrypoint and the arguments that should be
/// provided to it upon the dispatch.
data: Vec<u8>,
},
// Examples:
// HrmpOpenChannel { .. },
// HrmpCloseChannel { .. },
// Review thread on the commented-out examples:
// - Contributor: "What are these examples of?"
// - Author: "Of other potential kinds of messages. I can remove that if you
//   think that's unnecessary given that those will become real messages."
// - Contributor: "I would be in favour of removing them, I find it unclear
//   what usage they show."
}
```

Expand Down
7 changes: 6 additions & 1 deletion roadmap/implementers-guide/src/types/runtime.md
Original file line number Diff line number Diff line change
Expand Up @@ -39,7 +39,12 @@ struct HostConfiguration {
/// Total size of messages allowed in the parachain -> relay-chain message queue before which
/// no further messages may be added to it. If it exceeds this then the queue may contain only
/// a single message.
pub watermark_upward_queue_size: u32,
pub max_upward_queue_size: u32,
/// Maximum amount of weight that we wish to devote to the processing of the dispatchable upward
/// messages.
pub max_parachain_ump_dispatch_weight: u32,
/// The maximum number of messages that a candidate can contain.
pub max_upward_msg_num_per_candidate: u32,
/// Number of sessions after which an HRMP open channel request expires.
pub hrmp_open_request_ttl: u32,
/// The deposit that the sender should provide for opening an HRMP channel.
Expand Down