+
+NEAR Protocol is a scalable blockchain protocol.
+For an overview of NEAR Protocol, read the following documents in order:
+
+- Terminology
+- Data structures
+- Architecture
+- Chain specification
+- Runtime specification
+- Economics
+
+
+Standards such as the Fungible Token Standard can be found on the Standards page.
+
+
+
+A chain is replication machinery: for any type of state, it provides a way to replicate that state across the network and reach consensus on it.
+
+
+
+
+NEAR Protocol has an account name system. An Account ID is similar to a username. Account IDs have to follow these rules:
+
+
+- minimum length is 2
+- maximum length is 64
+- Account ID consists of Account ID parts separated by `.`
+- Account ID part consists of lowercase alphanumeric symbols separated by either `_` or `-`.
+
+Account names are similar to domain names.
+Anyone can create a top level account (TLA) without separators, e.g. `near`.
+Only `near` can create `alice.near`, and only `alice.near` can create `app.alice.near`, and so on.
+Note that `near` can NOT create `app.alice.near` directly.
+Regex for a full account ID, without checking for length:
+`^(([a-z\d]+[\-_])*[a-z\d]+\.)*([a-z\d]+[\-_])*[a-z\d]+$`
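A minimal sketch of validating an account ID against these rules; the helper name is ours, not part of the spec. The regex above checks characters and separators, so only the 2..64 length bounds need to be added:

```python
import re

# Regex from the spec; it does not enforce the 2..64 length bounds.
ACCOUNT_ID_RE = re.compile(r"^(([a-z\d]+[\-_])*[a-z\d]+\.)*([a-z\d]+[\-_])*[a-z\d]+$")

def is_valid_account_id(account_id: str) -> bool:
    """Length bounds plus the character/separator rules."""
    return 2 <= len(account_id) <= 64 and ACCOUNT_ID_RE.match(account_id) is not None
```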
+
+
+| Name | Value |
+| - | - |
+| `REGISTRAR_ACCOUNT_ID` | `registrar` |
+| `MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH` | 32 |
+
+Top level account names (TLAs) are very valuable as they provide a root of trust and discoverability for companies, applications and users.
+To allow for fair access to them, top level account names shorter than `MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH` characters are going to be auctioned off.
+Specifically, only the `REGISTRAR_ACCOUNT_ID` account can create new top level accounts that are shorter than `MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH` characters. `REGISTRAR_ACCOUNT_ID` implements a standard Account Naming (link TODO) interface to allow creating new accounts.
+def action_create_account(predecessor_id, account_id):
+ """Called on CreateAccount action in receipt."""
+ if len(account_id) < MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH and predecessor_id != REGISTRAR_ACCOUNT_ID:
+ raise CreateAccountOnlyByRegistrar(account_id, REGISTRAR_ACCOUNT_ID, predecessor_id)
+ # Otherwise, create account with given `account_id`.
+
+Note: we are not going to deploy the `registrar` auction at launch; instead, the Foundation will be allowed to deploy it after the initial launch. A link to the details of the auction will be added here in the next spec release post MainNet.
+
+Valid accounts:
+ok
+bowen
+ek-2
+ek.near
+com
+google.com
+bowen.google.com
+near
+illia.cheap-accounts.near
+max_99.near
+100
+near2019
+over.9000
+a.bro
+bro.a // Valid, but can't be created, because "a" is too short
+
+Invalid accounts:
+not ok // Whitespace characters are not allowed
+a // Too short
+100- // Suffix separator
+bo__wen // Two separators in a row
+_illia // Prefix separator
+.near // Prefix dot separator
+near. // Suffix dot separator
+a..near // Two dot separators in a row
+$$$ // Non alphanumeric characters are not allowed
+WAT // Non lowercase characters are not allowed
+me@google.com // @ is not allowed (it was allowed in the past)
+// TOO LONG:
+abcdefghijklmnopqrstuvwxyz.abcdefghijklmnopqrstuvwxyz.abcdefghijklmnopqrstuvwxyz
+
+
+Data for a single account is collocated in one shard. The account data consists of the following:
+
+
+Total account balance consists of unlocked balance and locked balance.
+Unlocked balance is tokens that the account can use for transaction fees, transfers, staking and other operations.
+Locked balance is tokens that are currently in use for staking to be a validator or to become a validator.
+Locked balance may become unlocked at the beginning of an epoch. See [Staking] for details.
+
+A contract (AKA smart contract) is a program in WebAssembly that belongs to a specific account.
+When an account is created, it doesn't have a contract.
+A contract has to be explicitly deployed, either by the account owner, or during account creation.
+A contract can be executed by anyone who calls a method on your account. A contract has access to the storage on your account.
+
+Every account has its own storage. It's a persistent key-value trie. Keys are ordered in lexicographical order.
+The storage can only be modified by the contract on the account.
+The current implementation of the Runtime only allows your account's contract to read from the storage, but this might change in the future so that other accounts' contracts can read from your storage.
+NOTE: Accounts are charged recurrent rent for the total storage. This includes storage of the account itself, contract code, contract storage and all access keys.
+
+An access key grants access to an account. Each access key on the account is identified by a unique public key.
+This public key is used to validate signatures of transactions.
+Each access key contains a unique nonce to differentiate or order transactions signed with this access key.
+Each access key has a permission associated with it. The permission can be one of two types:
+
+- Full permission. It grants full access to the account.
+- Function call permission. It grants access to only issue function call transactions.
+
+See [Access Keys] for more details.
+
+An access key provides access for a particular account. Each access key belongs to some account and
+is identified by a unique (within the account) public key. Access keys are stored as `account_id,public_key`
+in a trie state. An account can have from zero to multiple access keys.
+
+pub struct AccessKey {
+    /// The nonce for this access key.
+    /// NOTE: In some cases the access key needs to be recreated. If the new access key reuses the
+    /// same public key, the nonce of the new access key should be equal to the nonce of the old
+    /// access key. It's required to avoid replaying old transactions again.
+    pub nonce: Nonce,
+    /// Defines permissions for this access key.
+    pub permission: AccessKeyPermission,
+}
+
+There are 2 types of `AccessKeyPermission` in Near currently: `FullAccess` and `FunctionCall`.
+`FullAccess` grants permission to issue any action on the account, such as DeployContract, transferring tokens to another account (Transfer), calling functions (FunctionCall), staking (Stake) and even deleting the account (DeleteAccountAction). `FullAccess` also allows managing access keys. `AccessKeyPermission::FunctionCall` is limited to contract calls only.
+
+pub enum AccessKeyPermission {
+    FunctionCall(FunctionCallPermission),
+    FullAccess,
+}
+
+
+Grants limited permission to make FunctionCall transactions to a specified `receiver_id`, optionally restricted to particular methods of that contract, with a limit on the allowed balance to spend.
+
+pub struct FunctionCallPermission {
+    /// Allowance is a balance limit to use by this access key to pay for function call gas and
+    /// transaction fees. When this access key is used, both account balance and the allowance is
+    /// decreased by the same value.
+    /// `None` means unlimited allowance.
+    /// NOTE: To change or increase the allowance, the old access key needs to be deleted and a new
+    /// access key should be created.
+    pub allowance: Option<Balance>,
+
+    /// The access key only allows transactions with the given receiver's account id.
+    pub receiver_id: AccountId,
+
+    /// A list of method names that can be used. The access key only allows transactions with the
+    /// function call of one of the given method names.
+    /// Empty list means any method name can be used.
+    pub method_names: Vec<String>,
+}
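To illustrate how these fields interact, here is a hedged sketch (not normative code, and the function name is ours) of checking whether a function-call access key permits a given transaction:

```python
def is_function_call_allowed(allowance, receiver_id, method_names,
                             tx_receiver, tx_method, tx_cost):
    """Sketch of the checks implied by FunctionCallPermission."""
    if receiver_id != tx_receiver:
        return False          # the key is bound to a single receiver account
    if method_names and tx_method not in method_names:
        return False          # an empty list means any method is allowed
    if allowance is not None and tx_cost > allowance:
        return False          # None means unlimited allowance
    return True
```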
+
+
+If an account has no access keys attached, it has no owner who can run transactions on its behalf. However, if such an account has code, it can still be invoked by other accounts and contracts.
+
+
+Near node consists roughly of a blockchain layer and a runtime layer.
+These layers are designed to be independent from each other: the blockchain layer can in theory support a runtime that processes
+transactions differently, has a different virtual machine (e.g. RISC-V), has different fees; on the other hand the runtime
+is oblivious to where the transactions are coming from. It is not aware whether the
+blockchain it runs on is sharded, what consensus it uses, and whether it runs as part of a blockchain at all.
+The blockchain layer and the runtime layer share the following components and invariants:
+
+Transactions and receipts are a fundamental concept in Near Protocol. Transactions represent actions requested by the
+blockchain user, e.g. send assets, create account, execute a method, etc. Receipts, on the other hand, are an internal
+structure; think of a receipt as a message used inside a message-passing system.
+Transactions are created outside the Near Protocol node, by the user who sends them via RPC or network communication.
+Receipts are created by the runtime from transactions or as the result of processing other receipts.
+The blockchain layer cannot create or process transactions and receipts; it can only manipulate them by passing them
+around and feeding them to a runtime.
+
+Similar to Ethereum, Near Protocol is an account-based system, which means that each blockchain user is roughly
+associated with one or several accounts (there are exceptions though, e.g. when users share an account and are separated
+through the access keys).
+The runtime is essentially a complex set of rules on what to do with accounts based on the information from the
+transactions and the receipts. It is therefore deeply aware of the concept of account.
+The blockchain layer, however, is mostly aware of the accounts through the trie (see below) and the validators (see below).
+Outside these two it does not operate on accounts directly.
+
+Every account at NEAR belongs to some shard.
+All the information related to this account also belongs to the same shard. The information includes:
+
+- Balance
+- Locked balance (for staking)
+- Code of the contract
+- Key-value storage of the contract
+- All Access Keys
+
+The Runtime assumes this is the only information available during contract execution.
+While other accounts may belong to the same shard, the Runtime never uses or provides them during contract execution.
+We can just assume that every account belongs to its own shard, so there is no reason to intentionally try to collocate accounts.
+
+Near Protocol is a stateful blockchain -- there is a state associated with each account and the user actions performed
+through transactions mutate that state. The state then is stored as a trie, and both the blockchain layer and the
+runtime layer are aware of this technical detail.
+The blockchain layer manipulates the trie directly. It partitions the trie between the shards to distribute the load.
+It synchronizes the trie between the nodes, and eventually it is responsible for maintaining the consistency of the trie
+between the nodes through its consensus mechanism and other game-theoretic methods.
+The runtime layer is also aware that the storage that it uses to perform the operations on is a trie. In general it does
+not have to know this technical detail and in theory we could have abstracted out the trie as a generic key-value storage.
+However, we allow some trie-specific operations that we expose to the smart contract developers so that they utilize
+Near Protocol to its maximum efficiency.
+
+Even though tokens are a fundamental concept of the blockchain, they are neatly encapsulated
+inside the runtime layer together with the gas, fees, and rewards.
+The only way the blockchain layer is aware of the tokens and the gas is through the computation of the exchange rate
+and the inflation which is based strictly on the block production mechanics.
+
+Both the blockchain layer and the runtime layer are aware of a special group of participants who are
+responsible for maintaining the integrity of the Near Protocol. These participants are associated with the
+accounts and are rewarded accordingly. The reward part is what the runtime layer is aware of, while everything
+around the orchestration of the validators is inside the blockchain layer.
+
+Interestingly, the following concepts are for the blockchain layer only and the runtime layer is not aware of them:
+
+- Sharding -- the runtime layer does not know that it is being used in a sharded blockchain, e.g. it does not know
+that the trie it works on is only a part of the overall blockchain state;
+- Blocks or chunks -- the runtime does not know that the receipts that it processes constitute a chunk and that the output
+receipts will be used in other chunks. From the runtime perspective it consumes and outputs batches of transactions and receipts;
+- Consensus -- the runtime does not know how consistency of the state is maintained;
+- Communication -- the runtime doesn't know anything about the current network topology. A receipt has only a `receiver_id` (a recipient account) and knows nothing about the destination shard, so it's the blockchain layer's responsibility to route each receipt.
+
+
+
+- Fees and rewards -- fees and rewards are neatly encapsulated in the runtime layer. The blockchain layer, however,
+has indirect knowledge of them through the computation of the tokens-to-gas exchange rate and the inflation.
+
+
+TBD
+
+
+For the purpose of maintaining consensus, transactions are grouped into blocks. There is a single preconfigured block \(G\) called genesis block. Every block except \(G\) has a link pointing to the previous block \(prev(B)\), where \(B\) is the block, and \(G\) is reachable from every block by following those links (that is, there are no cycles).
+The links between blocks give rise to a partial order: for blocks \(A\) and \(B\), \(A < B\) means that \(A \ne B\) and \(A\) is reachable from \(B\) by following links to previous blocks, and \(A \le B\) means that \(A < B\) or \(A = B\). The relations \(>\) and \(\ge\) are defined as the reflected versions of \(<\) and \(\le\), respectively. Finally, \(A \sim B\) means that either \(A < B\), \(A = B\) or \(A > B\), and \(A \nsim B\) means the opposite.
+A chain \(chain(T)\) is a set of blocks reachable from block \(T\), which is called its tip. That is, \(chain(T) = \{B | B \le T\}\). For any blocks \(A\) and \(B\), there is a chain that both \(A\) and \(B\) belong to iff \(A \sim B\). In this case, \(A\) and \(B\) are said to be on the same chain.
+Each block has an integer height \(h(B)\). It is guaranteed that block heights are monotonic (that is, for any block \(B \ne G\), \(h(B) > h(prev(B))\)), but they need not be consecutive. Also, \(h(G)\) may not be zero. Each node keeps track of a valid block with the largest height it knows about, which is called its head.
+Blocks are grouped into epochs. In a chain, the set of blocks that belongs to some epoch forms a contiguous range: if blocks \(A\) and \(B\) such that \(A < B\) belong to the same epoch, then every block \(X\) such that \(A < X < B\) also belongs to that epoch. Epochs can be identified by sequential indices: \(G\) belongs to an epoch with index \(0\), and for every other block \(B\), the index of its epoch is either the same as that of \(prev(B)\), or one greater.
+Each epoch is associated with a set of block producers that are validating blocks in that epoch, as well as an assignment of block heights to block producers that are responsible for producing a block at that height. A block producer responsible for producing a block at height \(h\) is called block proposer at \(h\). This information (the set and the assignment) for an epoch with index \(i \ge 2\) is determined by the last block of the epoch with index \(i-2\). For epochs with indices \(0\) and \(1\), this information is preconfigured. Therefore, if two chains share the last block of some epoch, they will have the same set and the same assignment for the next two epochs, but not necessarily for any epoch after that.
+The consensus protocol defines a notion of finality. Informally, if a block \(B\) is final, any future final blocks may only be built on top of \(B\). Therefore, transactions in \(B\) and preceding blocks are never going to be reversed. Finality is not a function of a block itself, rather, a block may be final or not final in some chain it is a member of. Specifically, \(final(B, T)\), where \(B \le T\), means that \(B\) is final in \(chain(T)\). A block that is final in a chain is final in all of its extensions: specifically, if \(final(B, T)\) is true, then \(final(B, T')\) is also true for all \(T' \ge T\).
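The relations above can be made concrete with a toy model in which blocks are ids and `prev` links form a dict (illustrative only; names are ours):

```python
def is_reachable(prev, a, b):
    """a <= b: a is reachable from b by following prev links (genesis has no entry)."""
    while b is not None:
        if a == b:
            return True
        b = prev.get(b)
    return False

def on_same_chain(prev, a, b):
    """a ~ b: a <= b or b <= a; forks are incomparable."""
    return is_reachable(prev, a, b) or is_reachable(prev, b, a)
```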
+
+The fields in the Block header relevant to the consensus process are:
+
+struct BlockHeader {
+    ...
+    prev_hash: BlockHash,
+    height: BlockHeight,
+    epoch_id: EpochId,
+    last_final_block_hash: BlockHash,
+    approvals: Vec<Option<Signature>>
+    ...
+}
+
+Block producers in the particular epoch exchange many kinds of messages. The two kinds that are relevant to the consensus are Blocks and Approvals. The approval contains the following fields:
+
+enum ApprovalInner {
+    Endorsement(BlockHash),
+    Skip(BlockHeight),
+}
+
+struct Approval {
+    inner: ApprovalInner,
+    target_height: BlockHeight,
+    signature: Signature,
+    account_id: AccountId
+}
+
+Where the parameter of the `Endorsement` is the hash of the approved block, the parameter of the `Skip` is the height of the approved block, `target_height` is the specific height at which the approval can be used (an approval with a particular `target_height` can only be included in the `approvals` of a block that has `height = target_height`), `account_id` is the account of the block producer who created the approval, and `signature` is their signature on the tuple `(inner, target_height)`.
+
+Every block \(B\) except the genesis block must logically contain approvals of a form described in the next paragraph from block producers whose cumulative stake exceeds \(^2\!/_3\) of the total stake in the current epoch, and in specific conditions described in section epoch switches also the approvals of the same form from block producers whose cumulative stake exceeds \(^2\!/_3\) of the total stake in the next epoch.
+The approvals logically included in the block must be an `Endorsement` with the hash of \(prev(B)\) if and only if \(h(B) = h(prev(B))+1\); otherwise it must be a `Skip` with the height of \(prev(B)\). See this section below for details on why the endorsements must contain the hash of the previous block, and skips must contain the height.
+Note that since each approval that is logically stored in the block is the same for each block producer (except for the `account_id` of the sender and the `signature`), it is redundant to store the full approvals. Instead, we physically store only the signatures of the approvals. The specific way they are stored is the following: we first fetch the ordered set of block producers from the current epoch. If the block is on the epoch boundary and also needs to include approvals from the next epoch (see epoch switches), we add new accounts from the new epoch:
+def get_accounts_for_block_ordered(h, prev_block):
+ cur_epoch = get_next_block_epoch(prev_block)
+ next_epoch = get_next_block_next_epoch(prev_block)
+
+ account_ids = get_epoch_block_producers_ordered(cur_epoch)
+ if next_block_needs_approvals_from_next_epoch(prev_block):
+ for account_id in get_epoch_block_producers_ordered(next_epoch):
+ if account_id not in account_ids:
+ account_ids.append(account_id)
+
+ return account_ids
+
+The block then contains a vector of optional signatures of the same or smaller size than the resulting set of `account_ids`, with each element being `None` if the approval for such account is absent, or the signature on the approval message if it is present. It's easy to show that the actual approvals that were signed by the block producers can easily be reconstructed from the information available in the block, and thus the signatures can be verified. If the vector of signatures is shorter than the length of `account_ids`, the remaining signatures are assumed to be `None`.
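A sketch of that reconstruction (our illustration; the field layout is assumed from the description above): pad a short signature vector with `None`, then pair each signature with the block producer at the same index.

```python
def reconstruct_approvals(account_ids, signatures, inner, target_height):
    """Rebuild the full approvals from the ordered producers and stored signatures."""
    padded = list(signatures) + [None] * (len(account_ids) - len(signatures))
    return {
        account_id: {"inner": inner, "target_height": target_height, "signature": sig}
        for account_id, sig in zip(account_ids, padded)
        if sig is not None  # None means the producer's approval is absent
    }
```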
+
+On receipt of the approval message the participant just stores it in the collection of approval messages.
+def on_approval(self, approval):
+ self.approvals.append(approval)
+
+Whenever a participant receives a block, the operations relevant to the consensus include updating the `head` and initiating a timer to start sending the approvals on the block to the block producers at the consecutive `target_height`s. The timer delays depend on the height of the last final block, so that information is also persisted.
+def on_block(self, block):
+ header = block.header
+
+ if header.height <= self.head_height:
+ return
+
+ last_final_block = store.get_block(header.last_final_block_hash)
+
+ self.head_height = header.height
+ self.head_hash = block.hash()
+ self.largest_final_height = last_final_block.height
+
+ self.timer_height = self.head_height + 1
+ self.timer_started = time.time()
+
+ self.endorsement_pending = True
+
+The timer needs to be checked periodically, and contains the following logic:
+def get_delay(n):
+    return min(MAX_DELAY, MIN_DELAY + DELAY_STEP * (n-2))
+
+def process_timer(self):
+ now = time.time()
+
+ skip_delay = get_delay(self.timer_height - self.largest_final_height)
+
+ if self.endorsement_pending and now > self.timer_started + ENDORSEMENT_DELAY:
+
+ if self.head_height >= self.largest_target_height:
+ self.largest_target_height = self.head_height + 1
+            self.send_approval(self.head_height + 1)
+
+ self.endorsement_pending = False
+
+ if now > self.timer_started + skip_delay:
+ assert not self.endorsement_pending
+
+ self.largest_target_height = max(self.largest_target_height, self.timer_height + 1)
+ self.send_approval(self.timer_height + 1)
+
+ self.timer_started = now
+ self.timer_height += 1
+
+def send_approval(self, target_height):
+ if target_height == self.head_height + 1:
+ inner = Endorsement(self.head_hash)
+ else:
+ inner = Skip(self.head_height)
+
+ approval = Approval(inner, target_height)
+ send(approval, to_whom = get_block_proposer(self.head_hash, target_height))
+
+Where `get_block_proposer` returns the next block proposer given the previous block and the height of the next block.
+It is also necessary that `ENDORSEMENT_DELAY < MIN_DELAY`. Moreover, while not necessary for correctness, we require that `ENDORSEMENT_DELAY * 2 <= MIN_DELAY`.
+
+We first define a convenience function to fetch approvals that can be included in a block at a particular height:
+def get_approvals(self, target_height):
+ return [approval for approval
+ in self.approvals
+ if approval.target_height == target_height and
+ (isinstance(approval.inner, Skip) and approval.prev_height == self.head_height or
+ isinstance(approval.inner, Endorsement) and approval.prev_hash == self.head_hash)]
+
+A block producer assigned to a particular height produces a block at that height whenever `get_approvals` returns approvals from block producers whose stake collectively exceeds 2/3 of the total stake.
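The stake-weighted threshold itself reduces to a simple comparison; a sketch (function name ours) using integer arithmetic to avoid rounding:

```python
def has_supermajority(approving_stake, total_stake):
    """True iff approving stake strictly exceeds 2/3 of the total stake."""
    return 3 * approving_stake > 2 * total_stake
```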
+
+A block \(B\) is final in \(chain(T)\), where \(T \ge B\), when either \(B = G\) or there is a block \(X \le T\) such that \(B = prev(prev(X))\) and \(h(X) = h(prev(X))+1 = h(B)+2\). That is, either \(B\) is the genesis block, or \(chain(T)\) includes at least two blocks on top of \(B\), and these three blocks (\(B\) and the two following blocks) have consecutive heights.
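This finality rule can be checked mechanically; here is a sketch over toy `prev`/`height` dicts (our model, where genesis is the block with no `prev` entry):

```python
def is_final(prev, height, b, t):
    """final(B, T): B is genesis, or chain(T) contains a block X with
    B = prev(prev(X)) and h(X) = h(prev(X)) + 1 = h(B) + 2."""
    if prev.get(b) is None:   # genesis is always final
        return True
    x = t
    while x is not None:      # walk chain(T) from the tip
        p = prev.get(x)
        if p is not None and prev.get(p) == b \
           and height[x] == height[p] + 1 == height[b] + 2:
            return True
        x = p
    return False
```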
+
+There's a parameter \(epoch\_length \ge 3\) that defines the minimum length of an epoch. Suppose that a particular epoch \(e\_cur\) started at height \(h\), and say the next epoch will be \(e\_next\). Say \(BP(e)\) is a set of block producers in epoch \(e\). Say \(last\_final(T)\) is the highest final block in \(chain(T)\). The following are the rules of what blocks contain approvals from what block producers, and belong to what epoch.
+
+- Any block \(B\) with \(h(prev(B)) < h+epoch\_length-3\) is in the epoch \(e\_cur\) and must have approvals from more than \(^2\!/_3\) of \(BP(e\_cur)\) (stake-weighted).
+- Any block \(B\) with \(h(prev(B)) \ge h+epoch\_length-3\) for which \(h(last\_final(prev(B))) < h+epoch\_length-3\) is in the epoch \(e\_cur\) and must logically include approvals from both more than \(^2\!/_3\) of \(BP(e\_cur)\) and more than \(^2\!/_3\) of \(BP(e\_next)\) (both stake-weighted).
+- The first block \(B\) with \(h(last\_final(prev(B))) >= h+epoch\_length-3\) is in the epoch \(e\_next\) and must logically include approvals from more than \(^2\!/_3\) of \(BP(e\_next)\) (stake-weighted).
+
+(see the definition of logically including approvals in approval requirements)
+
+Note that with the implementation above an honest block producer can never produce two endorsements with the same `prev_height` (call this condition conflicting endorsements), nor can they produce a skip message `s` and an endorsement `e` such that `s.prev_height < e.prev_height` and `s.target_height >= e.target_height` (call this condition conflicting skip and endorsement).
+Theorem Suppose that there are blocks \(B_1\), \(B_2\), \(T_1\) and \(T_2\) such that \(B_1 \nsim B_2\), \(final(B_1, T_1)\) and \(final(B_2, T_2)\). Then, more than \(^1\!/_3\) of the block producers in some epoch must have signed either conflicting endorsements or a conflicting skip and endorsement.
+Proof Without loss of generality, we can assume that these blocks are chosen such that their heights are smallest possible. Specifically, we can assume that \(h(T_1) = h(B_1)+2\) and \(h(T_2) = h(B_2)+2\). Also, letting \(B_c\) be the highest block that is an ancestor of both \(B_1\) and \(B_2\), we can assume that there is no block \(X\) such that \(final(X, T_1)\) and \(B_c < X < B_1\) or \(final(X, T_2)\) and \(B_c < X < B_2\).
+Lemma There is such an epoch \(E\) that all blocks \(X\) such that \(B_c < X \le T_1\) or \(B_c < X \le T_2\) include approvals from more than \(^2\!/_3\) of the block producers in \(E\).
+Proof There are two cases.
+Case 1: Blocks \(B_c\), \(T_1\) and \(T_2\) are all in the same epoch. Because the set of blocks in a given epoch in a given chain is a contiguous range, all blocks between them (specifically, all blocks \(X\) such that \(B_c < X < T_1\) or \(B_c < X < T_2\)) are also in the same epoch, so all those blocks include approvals from more than \(^2\!/_3\) of the block producers in that epoch.
+Case 2: Blocks \(B_c\), \(T_1\) and \(T_2\) are not all in the same epoch. Suppose that \(B_c\) and \(T_1\) are in different epochs. Let \(E\) be the epoch of \(T_1\) and \(E_p\) be the preceding epoch (\(T_1\) cannot be in the same epoch as the genesis block). Let \(R\) and \(S\) be the first and the last block of \(E_p\) in \(chain(T_1)\). Then, there must exist a block \(F\) in epoch \(E_p\) such that \(h(F)+2 = h(S) < h(T_1)\). Because \(h(F) < h(T_1)-2\), we have \(F < B_1\), and since there are no final blocks \(X\) such that \(B_c < X < B_1\), we conclude that \(F \le B_c\). Because there are no epochs between \(E\) and \(E_p\), we conclude that \(B_c\) is in epoch \(E_p\). Also, \(h(B_c) \ge h(F) \ge h(R)+epoch\_length-3\). Thus, any block after \(B_c\) and until the end of \(E\) must include approvals from more than \(^2\!/_3\) of the block producers in \(E\). Applying the same argument to \(chain(T_2)\), we can determine that \(T_2\) is either in \(E\) or \(E_p\), and in both cases all blocks \(X\) such that \(B_c < X \le T_2\) include approvals from more than \(^2\!/_3\) of block producers in \(E\) (the set of block producers in \(E\) is the same in \(chain(T_1)\) and \(chain(T_2)\) because the last block of the epoch preceding \(E_p\), if any, is before \(B_c\) and thus is shared by both chains). The case where \(B_c\) and \(T_1\) are in the same epoch, but \(B_c\) and \(T_2\) are in different epochs is handled similarly. Thus, the lemma is proven.
+Now back to the theorem. Without loss of generality, assume that \(h(B_1) \le h(B_2)\). On the one hand, if \(chain(T_2)\) doesn't include a block at height \(h(B_1)\), then the first block at height greater than \(h(B_1)\) must include skips from more than \(^2\!/_3\) of the block producers in \(E\) which conflict with endorsements in \(prev(T_1)\), therefore, more than \(^1\!/_3\) of the block producers in \(E\) must have signed conflicting skip and endorsement. Similarly, if \(chain(T_2)\) doesn't include a block at height \(h(B_1)+1\), more than \(^1\!/_3\) of the block producers in \(E\) signed both an endorsement in \(T_1\) and a skip in the first block in \(chain(T_2)\) at height greater than \(h(T_1)\). On the other hand, if \(chain(T_2)\) includes both a block at height \(h(B_1)\) and a block at height \(h(B_1)+1\), the latter must include endorsements for the former, which conflict with endorsements for \(B_1\). Therefore, more than \(^1\!/_3\) of the block producers in \(E\) must have signed conflicting endorsements. Thus, the theorem is proven.
+
+See the proof of liveness in near.ai/doomslug. The consensus in this section differs in that it requires two consecutive blocks with endorsements. The proof in the linked paper trivially extends, by observing that once the delay is sufficiently long for an honest block producer to collect enough endorsements, the next block producer ought to have enough time to collect all the endorsements too.
+
+The approval condition above:
+
+Any valid block must logically include approvals from block producers whose cumulative stake exceeds 2/3 of the total stake in the epoch. For a block `B` and its previous block `B'`, each approval in `B` must be an `Endorsement` with the hash of `B'` if and only if `B.height == B'.height + 1`; otherwise it must be a `Skip` with the height of `B'`.
+
+is more complex than desired, and it is tempting to unify the two conditions. Unfortunately, they cannot be unified.
+It is critical that for endorsements each approval has the `prev_hash` equal to the hash of the previous block, because otherwise the safety proof above doesn't work: in the second case the endorsements in `B1` and `Bx` can be the very same approvals.
+It is critical that for the skip messages we do not require the hashes in the approvals to match the hash of the previous block, because otherwise a malicious actor can create two blocks at the same height, and distribute them such that half of the block producers have one as their head, and the other half has the other. The two halves of the block producers will be sending skip messages with different `prev_hash` but the same `prev_height` to the future block producers, and if there were a requirement that the `prev_hash` in the skip matches exactly the `prev_hash` of the block, no block producer would be able to create their blocks.
+
+A client creates a transaction, computes the transaction hash and signs this hash to get a signed transaction.
+Now this signed transaction can be sent to a node.
+When a node receives a new signed transaction, it validates the transaction (if the node tracks the shard) and gossips it to the peers. Eventually, the valid transaction is added to a transaction pool.
+Every validating node has its own transaction pool. The transaction pool maintains transactions that were not yet discarded and not yet included into the chain.
+Before producing a chunk, transactions are ordered and validated again. This is done to produce chunks with only valid transactions.
+
+The transaction pool groups transactions by a pair of `(signer_id, signer_public_key)`.
+The `signer_id` is the account ID of the user who signed the transaction; the `signer_public_key` is the public key of the account's access key that was used to sign the transactions.
+Transactions within a group are not ordered.
+The valid order of the transactions in a chunk is the following:
+
+- transactions are ordered in batches.
+- within a batch, all transaction keys should be different.
+- a set of transaction keys in each subsequent batch should be a sub-set of keys from the previous batch.
+- transactions with the same key should be ordered in strictly increasing order of their corresponding nonces.
+
+Note:
+
+- the order within a batch is undefined. Each node should use a unique secret seed for that ordering to prevent users from finding the lowest keys and gaining an advantage on every node.
+
+The transaction pool provides a draining structure that allows pulling transactions in the proper order.
+
+The transaction validation happens twice: once before the transaction is added to the pool, and again before it is added to a chunk.
+
+This is done to quickly filter out transactions that have an invalid signature or are invalid on the latest state.
+
+A chunk producer has to create a chunk with valid and ordered transactions up to some limits.
+One limit is the maximum number of transactions, another is the total gas burnt for transactions.
+To order and filter transactions, the chunk producer gets a pool iterator and passes it to the runtime adapter.
+The runtime adapter pulls transactions one by one.
+The valid transactions are added to the result, invalid transactions are discarded.
+Once the limit is reached, all the remaining transactions from the iterator are returned back to the pool.
+
+The Pool Iterator is a trait that iterates over transaction groups until all transaction groups are empty.
+Pool Iterator returns a mutable reference to a transaction group that implements a draining iterator.
+The draining iterator is like a normal iterator, but it removes the returned entity from the group.
+It pulls transactions from the group in order from the smallest nonce to largest.
+The pool iterator and draining iterators for transaction groups allow the runtime adapter to create proper order.
+For every transaction group, the runtime adapter keeps pulling transactions until a valid transaction is found.
+If the transaction group becomes empty, then it's skipped.
+The runtime adapter may implement the following code to pull all valid transactions:
+
+```rust
+let mut valid_transactions = vec![];
+let mut pool_iter = pool.pool_iterator();
+// Take at most one valid transaction from each group per pass.
+while let Some(group_iter) = pool_iter.next() {
+    while let Some(tx) = group_iter.next() {
+        if is_valid(&tx) {
+            valid_transactions.push(tx);
+            break;
+        }
+    }
+}
+```
+
+
+Let's say:
+
+- account IDs are uppercase letters (`"A"`, `"B"`, `"C"`, ...)
+- public keys are lowercase letters (`"a"`, `"b"`, `"c"`, ...)
+- nonces are numbers (`1`, `2`, `3`, ...)
+
+A pool might have the following groups of transactions in its hashmap:
+```
+transactions: {
+    ("A", "a") -> [1, 3, 2, 1, 2]
+    ("B", "b") -> [13, 14]
+    ("C", "d") -> [7]
+    ("A", "c") -> [5, 2, 3]
+}
+```
+
+There are 3 accounts (`"A"`, `"B"`, `"C"`). Account `"A"` used 2 public keys (`"a"`, `"c"`). The other accounts used 1 public key each.
+Transactions within each group may have repeated nonces while in the pool.
+That's because the pool doesn't filter transactions with the same nonce, only transactions with the same hash.
+For this example, let's say that transactions are valid if the nonce is even and strictly greater than the previous nonce for the same key.
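This example validity rule can be captured in a tiny helper (hypothetical, used only for this walkthrough, not runtime code):

```rust
// Hypothetical validity rule for this example only: a nonce is valid if it
// is even and strictly greater than the last accepted nonce for the key.
fn is_valid(nonce: u64, last_accepted: Option<u64>) -> bool {
    nonce % 2 == 0 && last_accepted.map_or(true, |prev| nonce > prev)
}

fn main() {
    assert!(!is_valid(1, None));    // odd nonce -> invalid
    assert!(is_valid(2, None));     // even, nothing accepted yet -> valid
    assert!(!is_valid(2, Some(2))); // not strictly greater -> invalid
    assert!(is_valid(4, Some(2)));  // even and increasing -> valid
}
```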
+
+When `.pool_iterator()` is called, a new `PoolIteratorWrapper` is created. It holds the mutable reference to the pool,
+so the pool can't be modified outside of this iterator. The wrapper looks like this:
+```
+pool: {
+    transactions: {
+        ("A", "a") -> [1, 3, 2, 1, 2]
+        ("B", "b") -> [13, 14]
+        ("C", "d") -> [7]
+        ("A", "c") -> [5, 2, 3]
+    }
+}
+sorted_groups: [],
+```
+
+`sorted_groups` is a queue of transaction groups that were already sorted and pulled from the pool.
+
+The first group to be selected is for key `("A", "a")`; the pool iterator sorts transactions by nonces and returns a mutable reference to the group. The sorted nonces are `[1, 1, 2, 2, 3]`. The runtime adapter pulls `1`, then `1`, and then `2`. Both transactions with nonce `1` are invalid because of the odd nonce.
+The transaction with nonce `2` is added to the list of valid transactions.
+The transaction group is dropped and the pool iterator wrapper becomes the following:
+```
+pool: {
+    transactions: {
+        ("B", "b") -> [13, 14]
+        ("C", "d") -> [7]
+        ("A", "c") -> [5, 2, 3]
+    }
+}
+sorted_groups: [
+    ("A", "a") -> [2, 3]
+],
+```
+
+
+The next group is for key `("B", "b")`; the pool iterator sorts transactions by nonces and returns a mutable reference to the group. The sorted nonces are `[13, 14]`. The runtime adapter pulls `13`, then `14`. The transaction with nonce `13` is invalid because of the odd nonce.
+The transaction with nonce `14` is added to the list of valid transactions.
+The transaction group is dropped, but it's empty, so the pool iterator drops it completely:
+```
+pool: {
+    transactions: {
+        ("C", "d") -> [7]
+        ("A", "c") -> [5, 2, 3]
+    }
+}
+sorted_groups: [
+    ("A", "a") -> [2, 3]
+],
+```
+
+
+The next group is for key `("C", "d")`; the sorted nonces are `[7]`. The runtime adapter pulls `7`. The transaction with nonce `7` is invalid because of the odd nonce.
+No valid transaction is added for this group.
+The transaction group is dropped and it's empty, so the pool iterator drops it completely:
+```
+pool: {
+    transactions: {
+        ("A", "c") -> [5, 2, 3]
+    }
+}
+sorted_groups: [
+    ("A", "a") -> [2, 3]
+],
+```
+
+The next group is for key `("A", "c")`; the pool iterator sorts transactions by nonces and returns a mutable reference to the group. The sorted nonces are `[2, 3, 5]`. The runtime adapter pulls `2`.
+It's a valid transaction, so it's added to the list of valid transactions.
+The transaction group is dropped with the remaining transactions, so the pool iterator moves it to the `sorted_groups` queue:
+```
+pool: {
+    transactions: { }
+}
+sorted_groups: [
+    ("A", "a") -> [2, 3]
+    ("A", "c") -> [3, 5]
+],
+```
+
+
+The next group is pulled not from the pool, but from `sorted_groups`. The key is `("A", "a")`.
+It's already sorted, so the iterator returns the mutable reference. The nonces are `[2, 3]`. The runtime adapter pulls `2`, then pulls `3`.
+The transaction with nonce `2` is invalid, because we've already pulled a transaction with nonce `2` from this group.
+The new nonce has to be larger than the previous nonce, so this transaction is invalid.
+The transaction with nonce `3` is invalid because of the odd nonce.
+No valid transaction is added for this group.
+The transaction group is dropped and it's empty, so the pool iterator drops it completely:
+```
+pool: {
+    transactions: { }
+}
+sorted_groups: [
+    ("A", "c") -> [3, 5]
+],
+```
+
+The next group is for key `("A", "c")`, with nonces `[3, 5]`.
+The runtime adapter pulls `3`, then pulls `5`. Both transactions are invalid, because the nonces are odd.
+No transactions are added.
+The transaction group is dropped and the pool iterator wrapper becomes empty:
+```
+pool: {
+    transactions: { }
+}
+sorted_groups: [ ],
+```
+
+When the runtime adapter tries to pull the next group, the pool iterator returns `None`, so the runtime adapter drops the iterator.
+
+If the iterator were not fully drained and some transactions remained, they would be reinserted back into the pool.
+
+Transactions that were pulled from the pool:
+```
+// First batch
+("A", "a", 1),
+("A", "a", 1),
+("A", "a", 2),
+("B", "b", 13),
+("B", "b", 14),
+("C", "d", 7),
+("A", "c", 2),
+
+// Next batch
+("A", "a", 2),
+("A", "a", 3),
+("A", "c", 3),
+("A", "c", 5),
+```
+
+The valid transactions are:
+```
+("A", "a", 2),
+("B", "b", 14),
+("A", "c", 2),
+```
+
+In total there were only 3 valid transactions, and they all ended up in one batch.
+
+Other validators need to check the order of transactions in the produced chunk.
+It can be done in linear time, using a greedy algorithm.
+To select the first batch, we iterate over transactions one by one until we see a transaction
+with a key that we've already included in the first batch.
+This transaction belongs to the next batch.
+All transactions in batch N+1 should have a corresponding transaction with the same key in batch N;
+if there is no transaction with the same key in batch N, then the order is invalid.
+We also enforce the order within the sequence of transactions for the same key: their nonces should be in strictly increasing order.
+Here is the algorithm that validates the order:
+
+```rust
+fn validate_order(txs: &Vec<Transaction>) -> bool {
+    let mut nonces: HashMap<Key, Nonce> = HashMap::new();
+    let mut batches: HashMap<Key, usize> = HashMap::new();
+    let mut current_batch = 1;
+
+    for tx in txs {
+        let key = tx.key();
+
+        // Verifying nonce
+        let nonce = tx.nonce();
+        if let Some(last_nonce) = nonces.get(&key) {
+            if nonce <= *last_nonce {
+                // Nonces should increase.
+                return false;
+            }
+        }
+        nonces.insert(key.clone(), nonce);
+
+        // Verifying batch
+        if let Some(last_batch) = batches.get(&key) {
+            if *last_batch == current_batch {
+                current_batch += 1;
+            } else if *last_batch < current_batch - 1 {
+                // This key was skipped in the previous batch.
+                return false;
+            }
+        } else if current_batch > 1 {
+            // The key is not present in the first batch.
+            return false;
+        }
+        batches.insert(key, current_batch);
+    }
+    true
+}
+```
+
+
+
+In this section we explain how the `FunctionCall` action execution works, what its inputs are, and what its outputs are. Suppose the runtime received the following ActionReceipt:
+
+```rust
+ActionReceipt {
+    id: "A1",
+    signer_id: "alice",
+    signer_public_key: "6934...e248",
+    receiver_id: "dex",
+    predecessor_id: "alice",
+    input_data_ids: [],
+    output_data_receivers: [],
+    actions: [FunctionCall { gas: 100000, deposit: 100000u128, method_name: "exchange", args: "{arg1, arg2, ...}", ... }],
+}
+```
+
+
+`ActionReceipt.input_data_ids` must be satisfied before execution (see Receipt Matching). Each of `ActionReceipt.input_data_ids` will be converted to a `PromiseResult::Successful(Vec<u8>)` if `data_id.data` is `Some(Vec<u8>)`; otherwise, if `data_id.data` is `None`, the promise will be `PromiseResult::Failed`.
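The conversion rule can be sketched with simplified stand-in types (these are not the actual runtime definitions):

```rust
// Simplified stand-ins for the runtime types described above.
#[derive(Debug)]
enum PromiseResult {
    Successful(Vec<u8>),
    Failed,
}

// A fulfilled data dependency (Some) becomes a successful promise result;
// a missing `data` (None) becomes a failed one.
fn to_promise_result(data: Option<Vec<u8>>) -> PromiseResult {
    match data {
        Some(bytes) => PromiseResult::Successful(bytes),
        None => PromiseResult::Failed,
    }
}

fn main() {
    assert!(matches!(
        to_promise_result(Some(vec![1, 2])),
        PromiseResult::Successful(_)
    ));
    assert!(matches!(to_promise_result(None), PromiseResult::Failed));
}
```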
+
+The `FunctionCall` executes in the `receiver_id` account's environment. The execution has access to:
+
+- a vector of Promise Results, which can be accessed by the `promise_result` import (see PromisesAPI `promise_result`)
+- the original transaction's `signer_id` and `signer_public_key` data from the ActionReceipt (e.g. `method_name`, `args`, `predecessor_id`, `deposit`, `prepaid_gas` (which is `gas` in FunctionCall))
+- general blockchain data (e.g. `block_index`, `block_timestamp`)
+- read data from the account storage
+
+A full list of the data available for the contract can be found in Context API and Trie.
+
+First of all, the runtime prepares the Wasm binary to be executed:
+
+- loads the contract code from the `receiver_id` account storage
+- deserializes and validates the `code` Wasm binary (see `prepare::prepare_contract`)
+- injects the gas counting function `gas`, which will charge gas at the beginning of each code block
+- instantiates the Bindings Spec with the binary and calls the `FunctionCall.method_name` exported function
+
+During execution, the VM does the following:
+
+- counts burnt gas during execution
+- counts used gas (which is `burnt gas` + gas attached to the newly created receipts)
+- counts how the account's storage usage increased due to the call
+- collects logs produced by the contract
+- sets the return data
+- creates new receipts through PromisesAPI
+
+
+The output of the `FunctionCall`:
+
+- storage updates - changes to the account trie storage which will be applied on a successful call
+- `burnt_gas` - irreversible amount of gas which was spent on computations
+- `used_gas` - includes `burnt_gas` and the gas attached to the new `ActionReceipt`s created during the method execution. In case of failure, the created `ActionReceipt`s are not going to be sent, so the account pays only for `burnt_gas`
+- `balance` - unspent account balance (the account balance could be spent on deposits of newly created `FunctionCall`s or `TransferAction`s to other contracts)
+- `storage_usage` - storage usage after the `ActionReceipt` application
+- `logs` - during contract execution, utf8/16 string log records could be created. Logs are not persistent currently.
+- `new_receipts` - new `ActionReceipt`s created during the execution. These receipts are going to be sent to the respective `receiver_id`s (see the Receipt Matching explanation)
+- result - could be either `ReturnData::Value(Vec<u8>)` or `ReturnData::ReceiptIndex(u64)`
+
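The relation between burnt and used gas can be illustrated with a small sketch (the helper and the numbers are illustrative, not real protocol costs):

```rust
// Illustrative helper: used gas is the burnt gas plus the gas attached to
// receipts created during the call.
fn used_gas(burnt_gas: u64, attached_gas: u64) -> u64 {
    burnt_gas + attached_gas
}

fn main() {
    // Illustrative numbers only.
    assert_eq!(used_gas(3_000_000, 10_000_000), 13_000_000);
    // On failure the created receipts are not sent, so the account is only
    // charged for the burnt gas.
}
```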
+
+If the applied `ActionReceipt` contains `output_data_receivers`, the runtime will create a `DataReceipt` for each `data_id` and `receiver_id`, with `data` equal to the returned value. Eventually, these `DataReceipt`s will be delivered to the corresponding receivers.
+
+A successful result might not return any value and instead generate a bunch of new ActionReceipts. One example could be a callback. In this case, we assume the new receipt will send its value result to the `output_data_receivers` of the current `ActionReceipt`.
+
+A transaction in Near is a list of actions and additional information:
+
+```rust
+pub struct Transaction {
+    /// An account on whose behalf the transaction is signed
+    pub signer_id: AccountId,
+    /// An access key which was used to sign the transaction
+    pub public_key: PublicKey,
+    /// Nonce is used to determine the order of transactions in the pool.
+    /// It increments for a combination of `signer_id` and `public_key`
+    pub nonce: Nonce,
+    /// Receiver account for this transaction
+    pub receiver_id: AccountId,
+    /// The hash of the block in the blockchain on top of which the given transaction is valid
+    pub block_hash: CryptoHash,
+    /// A list of actions to be applied
+    pub actions: Vec<Action>,
+}
+```
+
+
+`SignedTransaction` is what the node receives from a wallet through the JSON-RPC endpoint; it is then routed to the shard where the `receiver_id` account lives. The signature proves ownership of the corresponding `public_key` (which is an AccessKey for a particular account) as well as the authenticity of the transaction itself.
+
+```rust
+pub struct SignedTransaction {
+    pub transaction: Transaction,
+    /// A signature of a hash of the Borsh-serialized Transaction
+    pub signature: Signature,
+}
+```
+
+Take a look at some scenarios of how a transaction can be applied.
+
+There are several action types in Near:
+
+```rust
+pub enum Action {
+    CreateAccount(CreateAccountAction),
+    DeployContract(DeployContractAction),
+    FunctionCall(FunctionCallAction),
+    Transfer(TransferAction),
+    Stake(StakeAction),
+    AddKey(AddKeyAction),
+    DeleteKey(DeleteKeyAction),
+    DeleteAccount(DeleteAccountAction),
+}
+```
+
+Each transaction consists of a list of actions to be performed on the `receiver_id` side. Sometimes the `signer_id` equals `receiver_id`. There is a set of action types for which `signer_id` and `receiver_id` are required to be equal. Actions require arguments and use data from the `Transaction` itself.
+// TODO: how to introduce the concept of sender_id
+
+Requirements:
+
+- unique `tx.receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+`CreateAccountAction` doesn't take any additional arguments; it uses `receiver_id` from the Transaction. `receiver_id` is the ID for the account to be created. The account ID should be valid and unique throughout the system.
+Outcome:
+
+- creates an account with `id` = `receiver_id`
+- sets the account's `storage_usage` to `account_cost` (genesis config)
+- sets the account's `storage_paid_at` to the current block height
+
+NOTE: for all subsequent actions in the transaction, the `signer_id` becomes the `receiver_id` until DeleteAccountAction. This allows executing actions on behalf of the just created account.
+
+```rust
+pub struct CreateAccountAction {}
+```
+
+
+Requirements:
+
+- `tx.signer_id` to be equal to `receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+Outcome:
+
+- sets the code for the account
+
+
+```rust
+pub struct DeployContractAction {
+    pub code: Vec<u8>, // a valid WebAssembly code
+}
+```
+
+
+Requirements:
+
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` or `AccessKeyPermission::FunctionCall`
+
+Calls a method of a particular contract. See details.
+
+```rust
+pub struct FunctionCallAction {
+    /// Name of exported Wasm function
+    pub method_name: String,
+    /// Serialized arguments
+    pub args: Vec<u8>,
+    /// Prepaid gas (gas_limit) for a function call
+    pub gas: Gas,
+    /// Amount of tokens to transfer to a receiver_id
+    pub deposit: Balance,
+}
+```
+
+
+Requirements:
+
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+Outcome:
+
+- transfers the amount specified in `deposit` from `tx.signer` to the `tx.receiver_id` account
+
+
+```rust
+pub struct TransferAction {
+    /// Amount of tokens to transfer to a receiver_id
+    pub deposit: Balance,
+}
+```
+
+
+Requirements:
+
+- `tx.signer_id` to be equal to `receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+
+```rust
+pub struct StakeAction {
+    // Amount of tokens to stake
+    pub stake: Balance,
+    // This public key is a public key of the validator node
+    pub public_key: PublicKey,
+}
+```
+
+Outcome:
+// TODO: cover staking
+
+
+Requirements:
+
+- `tx.signer_id` to be equal to `receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+Associates an AccessKey with the `public_key` provided.
+
+```rust
+pub struct AddKeyAction {
+    pub public_key: PublicKey,
+    pub access_key: AccessKey,
+}
+```
+
+
+Requirements:
+
+- `tx.signer_id` to be equal to `receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+
+
+```rust
+pub struct DeleteKeyAction {
+    pub public_key: PublicKey,
+}
+```
+
+
+Requirements:
+
+- `tx.signer_id` to be equal to `receiver_id`
+- `tx.public_key` to be `AccessKeyPermission::FullAccess` for the `signer_id`
+- `tx.account` shouldn't have any locked balance
+
+
+```rust
+pub struct DeleteAccountAction {
+    /// The remaining account balance will be transferred to the AccountId below
+    pub beneficiary_id: AccountId,
+}
+```
+
+
+All cross-contract (we assume that each account lives in its own shard) communication in Near happens through Receipts.
+Receipts are stateful in the sense that they serve not only as messages between accounts but also can be stored in the account storage to await DataReceipts.
+Each receipt has a `predecessor_id` (who sent it) and a `receiver_id` (the current account).
+Receipts are one of 2 types: action receipts or data receipts.
+Data receipts are receipts that contain some data for some `ActionReceipt` with the same `receiver_id`.
+Data receipts have 2 fields: the unique data identifier `data_id` and `data`, the received result.
+`data` is an `Option` field; it indicates whether the result was a success or a failure. If it's `Some`, then it means
+the remote execution was successful and it contains the vector of bytes of the result.
+Each `ActionReceipt` also contains fields related to data:
+
+- `input_data_ids` - a vector of input data with the `data_id`s required for the execution of this receipt.
+- `output_data_receivers` - a vector of output data receivers. It indicates where to send outgoing data. Each `DataReceiver` consists of `data_id` and `receiver_id` for routing.
+
+Before any action receipt is executed, all its input data dependencies need to be satisfied,
+which means all corresponding data receipts have to be received.
+If any of the data dependencies is missing, the action receipt is postponed until all missing data dependencies arrive.
+Because the Chain and Runtime guarantee that no receipts are lost, we can rely on every action receipt being executed eventually (see the Receipt Matching explanation).
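The postponement decision can be sketched as follows (an illustrative helper, not the actual runtime code):

```rust
use std::collections::HashSet;

// Illustrative: an action receipt can execute only when every data_id in
// its input data dependencies has already been received.
fn can_execute(input_data_ids: &[&str], received: &HashSet<&str>) -> bool {
    input_data_ids.iter().all(|id| received.contains(id))
}

fn main() {
    let mut received = HashSet::new();
    received.insert("e5fa44");
    // One dependency is still missing, so the receipt is postponed.
    assert!(!can_execute(&["e5fa44", "7448d8"], &received));
    received.insert("7448d8");
    // All dependencies satisfied: the receipt can be applied.
    assert!(can_execute(&["e5fa44", "7448d8"], &received));
}
```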
+Each `Receipt` has the following fields:
+
+
+`predecessor_id` - the account_id which issued the receipt.
+
+`receiver_id` - the destination account_id.
+
+`receipt_id` - a unique id for the receipt.
+
+There are 2 types of Receipts in Near: ActionReceipt and DataReceipt. An ActionReceipt is a request to apply actions, while a DataReceipt is the result of applying these actions.
+
+`ActionReceipt` represents a request to apply actions on the `receiver_id` side. It could be derived as a result of a `Transaction` execution or of another `ActionReceipt` processing. `ActionReceipt` consists of the following fields:
+
+
+`signer_id` - an account_id which signed the original transaction.
+
+`signer_public_key` - the public key of the AccessKey which was used to sign the original transaction.
+
+`gas_price` - the gas price which was set in the block where the original transaction was applied.
+
+`output_data_receivers` - of type `[DataReceiver{ data_id: CryptoHash, receiver_id: AccountId }]`.
+
+If a smart contract finishes its execution with some value (not a Promise), the runtime creates a `DataReceipt` for each of the `output_data_receivers`.
+
+`input_data_ids` - the receipt data dependencies. `input_data_ids` correspond to `DataReceipt.data_id`.
+
+
+
+A DataReceipt represents the final result of some contract execution.
+
+`data_id` - a unique DataReceipt identifier.
+
+`data` - the associated data in bytes. `None` indicates an error during execution.
+
+Receipts can be generated during the execution of a SignedTransaction (see example) or during the application of some `ActionReceipt` which contains a `FunctionCall` action. The result of the `FunctionCall` could either be a returned value or newly created receipts.
+
+The runtime doesn't expect receipts to come in a particular order. Each receipt is processed individually. The goal of the `Receipt Matching` process is to match all `ActionReceipt`s to the corresponding `DataReceipt`s.
+
+For each incoming `ActionReceipt`, the runtime checks whether we have all the `DataReceipt`s (defined as `ActionReceipt.input_data_ids`) required for execution. If all the required `DataReceipt`s are already in the storage, the runtime can apply this `ActionReceipt` immediately. Otherwise we save this receipt as a Postponed ActionReceipt. We also save the Pending DataReceipts Count and a link from each pending `DataReceipt` to the Postponed `ActionReceipt`. The runtime then waits for all the missing `DataReceipt`s before applying the Postponed `ActionReceipt`.
+
+A Postponed ActionReceipt is a receipt which the runtime stores until all the designated `DataReceipt`s arrive.
+
+- `key` = `account_id,receipt_id`
+- `value` = `[u8]`
+
+Where `account_id` is `Receipt.receiver_id`, `receipt_id` is `Receipt.receipt_id`, and the value is a serialized `Receipt` (whose type must be ActionReceipt).
+
+The Pending DataReceipts Count is a counter of pending `DataReceipt`s for a Postponed Receipt. It is initially set to the number of missing `input_data_ids` of the incoming `ActionReceipt` and is decremented with every newly received `DataReceipt`:
+
+- `key` = `account_id,receipt_id`
+- `value` = `u32`
+
+Where `account_id` is an AccountId, `receipt_id` is a CryptoHash, and the value is an integer.
+
+We index each pending `DataReceipt` so that when a new `DataReceipt` arrives we can find which Postponed Receipt it belongs to:
+
+- `key` = `account_id,data_id`
+- `value` = `receipt_id`
+
+
+First of all, the runtime saves the incoming `DataReceipt` to the storage as:
+
+- `key` = `account_id,data_id`
+- `value` = `[u8]`
+
+Where `account_id` is `Receipt.receiver_id`, `data_id` is `DataReceipt.data_id`, and the value is `DataReceipt.data` (which is typically a serialized result of the call to a particular contract).
+Next, the runtime checks if any Postponed `ActionReceipt` awaits this `DataReceipt` by querying the Pending `DataReceipt` to Postponed Receipt index. If there is no postponed `receipt_id` yet, we do nothing else. If there is a postponed `receipt_id`, we do the following:
+
+- decrement the Pending DataReceipts Count for the postponed receipt
+- remove the link from the pending `DataReceipt` to the Postponed Receipt
+
+If the Pending DataReceipts Count is now 0, it means all the `Receipt.input_data_ids` are in storage, and the runtime can safely apply the Postponed Receipt and remove it from the store.
+
+Suppose the runtime got the following `ActionReceipt`:
+```
+# Non-relevant fields are omitted.
+Receipt{
+    receiver_id: "alice",
+    receipt_id: "693406"
+    receipt: ActionReceipt {
+        input_data_ids: []
+    }
+}
+```
+Since `input_data_ids` is empty, the receipt can be applied right away. If the execution returns a `Result::Value` and the receipt has `output_data_receivers`, the runtime creates a `DataReceipt` for each of them.
+Now suppose the runtime got the following `ActionReceipt` (we use python-like pseudocode):
+```
+# Non-relevant fields are omitted.
+Receipt{
+    receiver_id: "alice",
+    receipt_id: "5e73d4"
+    receipt: ActionReceipt {
+        input_data_ids: ["e5fa44", "7448d8"]
+    }
+}
+```
+
+We can't apply this receipt right away: there are missing DataReceipts with IDs ["e5fa44", "7448d8"]. The runtime does the following:
+```
+postponed_receipts["alice,5e73d4"] = borsh_serialize(
+    Receipt{
+        receiver_id: "alice",
+        receipt_id: "5e73d4"
+        receipt: ActionReceipt {
+            input_data_ids: ["e5fa44", "7448d8"]
+        }
+    }
+)
+pending_data_receipt_store["alice,e5fa44"] = "5e73d4"
+pending_data_receipt_store["alice,7448d8"] = "5e73d4"
+pending_data_receipt_count["alice,5e73d4"] = 2
+```
+
+Note: subsequent receipts could arrive in the current block or in later ones; that's why we save the Postponed ActionReceipt in the storage.
+Then the first Pending `DataReceipt` arrives:
+```
+# Non-relevant fields are omitted.
+Receipt {
+    receiver_id: "alice",
+    receipt: DataReceipt {
+        data_id: "e5fa44",
+        data: "some data for alice",
+    }
+}
+```
+```
+data_receipts["alice,e5fa44"] = borsh_serialize(Receipt{
+    receiver_id: "alice",
+    receipt: DataReceipt {
+        data_id: "e5fa44",
+        data: "some data for alice",
+    }
+})
+pending_data_receipt_count["alice,5e73d4"] = 1
+del pending_data_receipt_store["alice,e5fa44"]
+```
+
+And finally the last Pending `DataReceipt` arrives:
+```
+# Non-relevant fields are omitted.
+Receipt{
+    receiver_id: "alice",
+    receipt: DataReceipt {
+        data_id: "7448d8",
+        data: "some more data for alice",
+    }
+}
+```
+```
+data_receipts["alice,7448d8"] = borsh_serialize(Receipt{
+    receiver_id: "alice",
+    receipt: DataReceipt {
+        data_id: "7448d8",
+        data: "some more data for alice",
+    }
+})
+postponed_receipt_id = pending_data_receipt_store["alice,7448d8"]
+postponed_receipt = postponed_receipts["alice," + postponed_receipt_id]
+del postponed_receipts["alice," + postponed_receipt_id]
+del pending_data_receipt_count["alice,5e73d4"]
+del pending_data_receipt_store["alice,7448d8"]
+apply_receipt(postponed_receipt)
+```
+
+
+In the following sections we go over the common scenarios that the runtime takes care of.
+
+Suppose Alice wants to transfer 100 tokens to Bob.
+In this case we are talking about native Near Protocol tokens, as opposed to user-defined tokens implemented through a smart contract.
+There are several ways this can be done:
+
+- Direct transfer through a transaction containing a transfer action;
+- Alice calling a smart contract that in turn creates a financial transaction towards Bob.
+
+In this section we are talking about the former, simpler scenario.
+
+For this to work, both Alice and Bob need to have accounts and access to them through
+full access keys.
+Suppose Alice has account `alice_near` and Bob has account `bob_near`. Also, some time in the past,
+each of them has created a public-secret key-pair, saved the secret key somewhere (e.g. in a wallet application),
+and created a full access key with the public key for the account.
+We also need to assume that both Alice and Bob have some number of tokens on their accounts. Alice needs >100 tokens on the account
+so that she can transfer 100 tokens to Bob, but Alice and Bob also need some tokens to pay for the rent of their accounts --
+which is essentially the cost of the storage occupied by the account in the Near Protocol network.
+
+To send the transaction, neither Alice nor Bob needs to run a node.
+However, Alice needs a way to create and sign a transaction structure.
+Suppose Alice uses near-shell or any other third-party tool for that.
+The tool then creates the following structure:
+```
+Transaction {
+    signer_id: "alice_near",
+    public_key: "ed25519:32zVgoqtuyRuDvSMZjWQ774kK36UTwuGRZMmPsS6xpMy",
+    nonce: 57,
+    receiver_id: "bob_near",
+    block_hash: "CjNSmWXTWhC3EhRVtqLhRmWMTkRbU96wUACqxMtV1uGf",
+    actions: vec![
+        Action::Transfer(TransferAction {deposit: 100} )
+    ],
+}
+```
+
+It contains one token transfer action, the id of the account that signs this transaction (`alice_near`),
+and the account towards which this transaction is addressed (`bob_near`). Alice also uses the public key
+associated with one of the full access keys of the `alice_near` account.
+Additionally, Alice uses a nonce, a unique value that allows Near Protocol to differentiate between transactions (in case several transfers come in rapid
+succession); the nonce should be strictly increasing with each transaction. Unlike in Ethereum, nonces are associated with access keys, as opposed to
+entire accounts, so several users using the same account through different access keys need not worry about accidentally
+reusing each other's nonces.
+The block hash is used to calculate the transaction "freshness". It makes sure the transaction does
+not get lost (say, somewhere in the network) and then arrive hours, days, or years later when it is no longer relevant
+or would be undesirable to execute. The transaction does not need to arrive at a specific block; instead it is required to
+arrive within a certain number of blocks from the block identified by the `block_hash` (as of 2019-10-27 the constant is 10 blocks).
+Any transaction arriving outside this threshold is considered to be invalid.
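The freshness rule can be sketched as follows (the helper and the use of block heights are illustrative; the constant matches the 10-block figure mentioned above):

```rust
// Illustrative constant: the validity window in blocks (10 as of 2019-10-27).
const TRANSACTION_VALIDITY_PERIOD: u64 = 10;

// A transaction anchored at `tx_block_height` (the height of the block
// identified by its `block_hash`) is still fresh at `current_height` only
// if it falls within the validity window.
fn is_fresh(tx_block_height: u64, current_height: u64) -> bool {
    current_height <= tx_block_height + TRANSACTION_VALIDITY_PERIOD
}

fn main() {
    assert!(is_fresh(100, 105));  // 5 blocks later: still valid
    assert!(is_fresh(100, 110));  // exactly at the edge of the window
    assert!(!is_fresh(100, 111)); // outside the threshold: invalid
}
```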
+near-shell or another tool that Alice uses then signs this transaction by computing the hash of the transaction and signing it
+with the secret key, resulting in a `SignedTransaction` object.
+
+To send the transaction, near-shell connects through the RPC to any Near Protocol node and submits it.
+If the user wants to wait until the transaction is processed, they can use the `send_tx_commit` JSONRPC method, which waits for the
+transaction to appear in a block. Otherwise the user can use `send_tx_async`.
+
+We skip the details of how the transaction arrives to be processed by the runtime, since it is part of the blockchain layer
+discussion.
+We consider the moment when the `SignedTransaction` is passed to `Runtime::apply` of the
+`runtime` crate.
+`Runtime::apply` immediately passes the transaction to `Runtime::process_transaction`,
+which in turn does the following:
+
+- Verifies that the transaction is valid;
+- Applies initial reversible and irreversible charges to the `alice_near` account;
+- Creates a receipt with the same set of actions directed towards `bob_near`.
+
+The first two items are performed inside the `Runtime::verify_and_charge_transaction` method.
+Specifically it does the following checks:
+
+- Verifies that `alice_near` and `bob_near` are syntactically valid account ids;
+- Verifies that the signature of the transaction is correct based on the transaction hash and the attached public key;
+- Retrieves the latest state of the `alice_near` account, and simultaneously checks that it exists;
+- Retrieves the state of the access key of `alice_near` used to sign the transaction;
+- Checks that the transaction nonce is greater than the nonce of the latest transaction executed with that access key;
+- Checks whether the account that signed the transaction is the same as the account that receives it. In our case the sender (`alice_near`) and the receiver (`bob_near`) are not the same. We apply different fees if the receiver and sender are the same account;
+- Applies the storage rent to the `alice_near` account;
+- Computes how much gas we need to spend to convert this transaction to a receipt;
+- Computes how much balance we need to subtract from `alice_near`, in this case 100 tokens;
+- Deducts the tokens and the gas from the `alice_near` balance, using the current gas price;
+- Checks whether after all these operations the account has enough balance to passively pay for the rent for the next several blocks (an economic constant defined by Near Protocol). Otherwise the account will be open for immediate deletion, which we do not want;
+- Updates the `alice_near` account with the new balance and the used access key with the new nonce;
+- Computes how much reward should be paid to the validators from the burnt gas.
+
+If any of the above operations fail all of the changes will be reverted.
+
+The receipt created in the previous section will eventually arrive at a runtime on the shard that hosts the `bob_near` account.
+Again, it will be processed by `Runtime::apply`, which will immediately call `Runtime::process_receipt`.
+It will check that this receipt does not have data dependencies (which is only the case for function calls) and will then call `Runtime::apply_action_receipt` on the `TransferAction`.
+`Runtime::apply_action_receipt` will perform the following checks:
+
+- Retrieves the state of the `bob_near` account, if it still exists (it is possible that Bob has deleted his account concurrently with the transfer transaction);
+- Applies the rent to Bob's account;
+- Computes the cost of processing a receipt and a transfer action;
+- Checks if `bob_near` still exists and, if it does, deposits the transferred tokens;
+- Computes how much reward should be paid to the validators from the burnt gas.
+
+
+This guide assumes that you have read the Financial Transaction section.
+Suppose Alice calls a function `reserve_trip(city: String, date: u64)` on a smart contract deployed to the `travel_agency`
+account, which in turn calls `reserve(date: u64)` on a smart contract deployed to the `hotel_near` account and attaches
+a callback to method `hotel_reservation_complete(date: u64)` on `travel_agency`.
+
+
+It is possible for Alice to call the `travel_agency` in several different ways.
+In the simplest scenario Alice has an account `alice_near` and she has a full access key.
+She then composes the following transaction that calls the `travel_agency`:
+Transaction {
+ signer_id: "alice_near",
+ public_key: "ed25519:32zVgoqtuyRuDvSMZjWQ774kK36UTwuGRZMmPsS6xpMy",
+ nonce: 57,
+ receiver_id: "travel_agency",
+ block_hash: "CjNSmWXTWhC3EhRVtqLhRmWMTkRbU96wUACqxMtV1uGf",
+ actions: vec![
+ Action::FunctionCall(FunctionCallAction {
+ method_name: "reserve_trip",
+ args: "{\"city\": \"Venice\", \"date\": 20191201}",
+ gas: 1000000,
+ tokens: 100,
+ })
+ ],
+}
+
+Here the public key corresponds to the full access key of the `alice_near` account. All other fields in `Transaction` were
+discussed in the Financial Transaction section. The `FunctionCallAction` action describes how
+the contract should be called. The `receiver_id` field in `Transaction` already establishes what contract should be executed;
+`FunctionCallAction` merely describes how it should be executed. Interestingly, the arguments are just a blob of bytes;
+it is up to the contract developer what serialization format they choose for their arguments. In this example, the contract
+developer has chosen to use JSON, and so the tool that Alice uses to compose this transaction is expected to use JSON too
+to pass the arguments. `gas` declares how much gas `alice_near` has prepaid for dynamically calculated fees of the smart
+contract executions and other actions that this transaction may spawn. The `tokens` field is the amount `alice_near` attaches
+to be deposited to whatever smart contract it is calling. Notice, `gas` and `tokens` are in different units of
+measurement.
+Now, consider a slightly more complex scenario. In this scenario Alice uses a restricted access key to call the function.
+That is, the permission of the access key is not `AccessKeyPermission::FullAccess` but is instead `AccessKeyPermission::FunctionCall(FunctionCallPermission)`,
+where
+FunctionCallPermission {
+ allowance: Some(3000),
+ receiver_id: "travel_agency",
+ method_names: [ "reserve_trip", "cancel_trip" ]
+}
+
+This scenario might arise when Alice's parent has given them restricted access to the `alice_near` account by
+creating an access key that can be used strictly for trip management.
+This access key allows up to `3000` tokens to be spent (which includes token transfers and payments for gas), it can
+only be used to call `travel_agency`, and it can only be used with the `reserve_trip` and `cancel_trip` methods.
+The way the runtime treats this case is almost exactly the same as the previous one, the only differences being how it verifies
+the signature on the signed transaction, and that it also checks that the allowance is not exceeded.
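The extra checks that a function-call access key adds can be sketched as below. This is a simplified illustration under the assumption that the allowance check compares against the total cost of the call; the function name and error strings are invented:

```rust
// Sketch of the additional checks for a FunctionCall access key.
// Mirrors the FunctionCallPermission struct shown above; error names are illustrative.
struct FunctionCallPermission {
    allowance: Option<u128>,
    receiver_id: String,
    method_names: Vec<String>,
}

fn check_function_call(
    perm: &FunctionCallPermission,
    receiver_id: &str,
    method_name: &str,
    cost: u128, // token transfers plus gas payments for this call
) -> Result<(), String> {
    // The key is bound to a single receiver contract.
    if perm.receiver_id != receiver_id {
        return Err("InvalidReceiver".to_string());
    }
    // If a method list is given, only those methods may be called.
    if !perm.method_names.is_empty() && !perm.method_names.iter().any(|m| m == method_name) {
        return Err("MethodNameMismatch".to_string());
    }
    // The allowance caps the total tokens this key may spend.
    if let Some(allowance) = perm.allowance {
        if cost > allowance {
            return Err("NotEnoughAllowance".to_string());
        }
    }
    Ok(())
}
```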
+Finally, in the last scenario, Alice does not have an account (or the existence of `alice_near` is irrelevant). However,
+Alice has a full or restricted access key directly on the `travel_agency` account. In that case `signer_id == receiver_id` in the
+`Transaction` object and the runtime will convert the transaction to the first receipt and apply that receipt in the same block.
+This section will focus on the first scenario, since the other two are the same with some minor differences.
+
+The process of converting this transaction to a receipt is very similar to the Financial Transaction,
+with several key points to note:
+
+- Since Alice attaches 100 tokens to the function call, we subtract them from `alice_near` upon converting the transaction to a receipt,
+similar to the regular financial transaction;
+- Since we are attaching 1000000 prepaid gas, we will not only subtract the gas costs of processing the receipt from `alice_near`, but
+will also purchase 1000000 gas using the current gas price.
+
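The total deduction can be written as a single formula. The numbers follow the example above (100 tokens attached, 1000000 prepaid gas); the execution-gas figure and the gas price are hypothetical:

```rust
// Total amount deducted from the signer when converting a function-call
// transaction to a receipt: the attached deposit, plus both the gas burnt
// on conversion and the prepaid gas purchased at the current gas price.
// Function name is illustrative, not nearcore's.
fn total_deducted(deposit: u128, prepaid_gas: u64, conversion_gas: u64, gas_price: u128) -> u128 {
    deposit + (prepaid_gas as u128 + conversion_gas as u128) * gas_price
}
```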
+
+The receipt created on the shard that hosts `alice_near` will eventually arrive to the shard hosting the `travel_agency` account.
+It will be processed in `Runtime::apply`, which will check that the receipt does not have data dependencies (which is the case because
+this function call is not a callback) and will call `Runtime::apply_action_receipt`.
+At this point receipt processing is similar to receipt processing from the Financial Transaction
+section, with one difference: we will also call `action_function_call`, which does the following:
+
+- Retrieves the Wasm code of the smart contract (either from the database or from the cache);
+- Initializes the runtime context through `VMContext` and creates `RuntimeExt`, which provides access to the trie when the smart contract
+calls the storage API. Specifically, the `"{\"city\": \"Venice\", \"date\": 20191201}"` arguments will be set in `VMContext`;
+- Calls `near_vm_runner::run`, which does the following:
+
+  - Injects gas, stack, and other kinds of metering;
+  - Verifies that the Wasm code does not use floats;
+  - Checks that the bindings API functions that the smart contract is trying to call are actually those provided by `near_vm_logic`;
+  - Compiles the Wasm code into a native binary;
+  - Calls `reserve_trip` on the smart contract.
+
+- During the execution of the smart contract it will at some point call `promise_create` and `promise_then`, which will
+call methods on `RuntimeExt` that will record that two promises were created and that the second one should
+wait on the first one. Specifically, `promise_create` will call `RuntimeExt::create_receipt(vec![], "hotel_near")`,
+returning `0`, and then `RuntimeExt::create_receipt(vec![0], "travel_agency")`;
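The dependency-recording behavior of `create_receipt` can be sketched as follows. This is a toy model assuming receipts are identified by their insertion index; the struct layout is invented and does not match nearcore's `RuntimeExt` internals:

```rust
// Toy model of promise receipt recording: each new receipt gets the next
// index, and may list earlier receipt indices it depends on.
struct ReceiptRecord {
    receiver_id: String,
    // Indices of receipts whose output this receipt waits on.
    input_receipt_indices: Vec<u64>,
}

#[derive(Default)]
struct Ext {
    receipts: Vec<ReceiptRecord>,
}

impl Ext {
    fn create_receipt(&mut self, deps: Vec<u64>, receiver_id: &str) -> u64 {
        self.receipts.push(ReceiptRecord {
            receiver_id: receiver_id.to_string(),
            input_receipt_indices: deps,
        });
        (self.receipts.len() - 1) as u64
    }
}
```

Replaying the example: the `hotel_near` receipt gets index `0`, and the `travel_agency` callback receipt records a dependency on it.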
+
+
+
+
+`action_function_call` then collects receipts from `VMContext` along with the execution result, logs, and information
+about used gas;
+`apply_action_receipt` then goes over the collected receipts from each action and returns them at the end of `Runtime::apply` together with
+other receipts.
+
+
+This receipt will have `output_data_receivers` with one element corresponding to the receipt that calls `hotel_reservation_complete`,
+which will tell the runtime that it should create a `DataReceipt` and send it towards `travel_agency` once the execution of `reserve(date: u64)` is complete.
+The rest of the smart contract execution is similar to the above.
+
+Upon receiving the `hotel_reservation_complete` receipt, the runtime will notice that its `input_data_ids` is not empty,
+which means that it cannot be executed until the `reserve` receipt is complete. It will store the receipt in the trie together
+with a counter of how many `DataReceipt`s it is waiting on.
+It will not call the Wasm smart contract at this point.
+
+Once the runtime receives the `DataReceipt`, it takes the receipt with the `hotel_reservation_complete` function call
+and executes it, following the same execution steps as with the `reserve_trip` receipt.
+
+Here is the high-level diagram of various runtime components, including some blockchain layer components.
+
+
+The Runtime crate encapsulates the logic of how transactions and receipts should be handled. If it encounters
+a smart contract call within a transaction or a receipt it calls `near-vm-runner`; all other actions, like account
+creation, it processes in-place.
+
+The main entry point of the `Runtime` is the method `apply`.
+It applies new signed transactions and incoming receipts for some chunk/shard on top of a
+given trie and the given state root.
+If the validator accounts update is provided, it updates validator accounts.
+All new signed transactions should be valid and already verified by the chunk producer.
+If any transaction is invalid, the method returns an `InvalidTxError`.
+In case of success, the method returns an `ApplyResult` that contains the new state root, trie changes,
+new outgoing receipts, stats for validators (e.g. total rent paid by all the affected accounts), and
+execution outcomes.
+
+It takes the following arguments:
+
+- `trie: Arc<Trie>` - the trie that contains the latest state.
+- `root: CryptoHash` - the hash of the state root in the trie.
+- `validator_accounts_update: &Option<ValidatorAccountsUpdate>` - optional field that contains updates for validator accounts.
+It's provided at the beginning of the epoch or when someone is slashed.
+- `apply_state: &ApplyState` - contains the block index and timestamp, epoch length, gas price and gas limit.
+- `prev_receipts: &[Receipt]` - the list of incoming receipts from the previous block.
+- `transactions: &[SignedTransaction]` - the list of new signed transactions.
+
+
+The execution consists of the following stages:
+
+- Snapshot the initial state.
+- Apply validator accounts update, if available.
+- Convert new signed transactions into the receipts.
+- Process receipts.
+- Check that incoming and outgoing balances match.
+- Finalize trie update.
+- Return `ApplyResult`.
+
+
+Validator accounts are accounts that have staked some tokens to become a validator.
+The validator accounts update usually happens when the current chunk is the first chunk of the epoch.
+It also happens when there is a challenge in the current block with one of the participants belonging to the current shard.
+This update distributes validator rewards, returns locked tokens and may slash some accounts out of their stake.
+
+New signed transactions are provided by the chunk producer in the chunk. These transactions should be ordered and already validated.
+The Runtime does validation again for the following reasons:
+
+- to charge accounts for transaction fees, transfer balances, prepaid gas and account rent;
+- to create new receipts;
+- to compute burnt gas;
+- to validate transactions again, in case the chunk producer was malicious.
+
+If the transaction has the same `signer_id` and `receiver_id`, then the new receipt is added to the list of new local receipts;
+otherwise it's added to the list of new outgoing receipts.
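The routing rule above is a one-line decision; a minimal sketch (the function name is invented for illustration):

```rust
// A receipt whose signer and receiver are the same account is processed
// locally on this shard; otherwise it is routed to the receiver's shard.
fn route_receipt(signer_id: &str, receiver_id: &str) -> &'static str {
    if signer_id == receiver_id { "local" } else { "outgoing" }
}
```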
+
+Receipts are processed one by one in the following order:
+
+- Previously delayed receipts from the state.
+- New local receipts.
+- New incoming receipts.
+
+After each processed receipt, we compare total gas burnt (so far) with the gas limit.
+When the total gas burnt reaches or exceeds the gas limit, the processing stops.
+The remaining receipts are considered delayed and stored into the state.
+
+Delayed receipts are stored as a persistent queue in the state.
+Initially, the first unprocessed index and the next available index are initialized to 0.
+When a new delayed receipt is added, it's written under the next available index into the state, and the next available index is incremented by 1.
+When a delayed receipt is processed, it's read from the state using the first unprocessed index, and the first unprocessed index is incremented.
+At the end of receipt processing, all remaining local and incoming receipts are considered to be delayed and are stored to the state in their respective order.
+If the indices changed during receipt processing, the delayed receipt indices are stored to the state as well.
+
+The receipt processing algorithm is the following:
+
+- Read the indices from the state, or initialize them with zeros.
+- While the first unprocessed index is less than the next available index, do the following:
+  - If the total burnt gas is at least the gas limit, break.
+  - Read the receipt from the first unprocessed index.
+  - Remove the receipt from the state.
+  - Increment the first unprocessed index.
+  - Process the receipt.
+  - Add the new burnt gas to the total burnt gas.
+  - Remember that the delayed queue indices have changed.
+- Process the new local receipts and then the new incoming receipts:
+  - If the total burnt gas is less than the gas limit:
+    - Process the receipt.
+    - Add the new burnt gas to the total burnt gas.
+  - Else:
+    - Store the receipt under the next available index.
+    - Increment the next available index.
+    - Remember that the delayed queue indices have changed.
+- If the delayed queue indices have changed, store the new indices to the state.
+
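The delayed queue above is a FIFO addressed by two indices. A minimal sketch, using a `HashMap` to stand in for the trie and a fixed per-receipt gas cost for simplicity (nearcore's actual layout and gas accounting differ):

```rust
use std::collections::HashMap;

// Persistent FIFO addressed by two indices, as described above.
#[derive(Default)]
struct DelayedQueue {
    first_unprocessed: u64,
    next_available: u64,
    state: HashMap<u64, String>, // receipt stored under its index
}

impl DelayedQueue {
    fn push(&mut self, receipt: String) {
        self.state.insert(self.next_available, receipt);
        self.next_available += 1;
    }
    fn pop(&mut self) -> Option<String> {
        if self.first_unprocessed >= self.next_available {
            return None; // queue is empty
        }
        let receipt = self.state.remove(&self.first_unprocessed);
        self.first_unprocessed += 1;
        receipt
    }
}

// Process delayed receipts until the gas limit is reached; the rest stay queued.
fn process(queue: &mut DelayedQueue, gas_per_receipt: u64, gas_limit: u64) -> Vec<String> {
    let mut burnt = 0;
    let mut processed = Vec::new();
    while burnt < gas_limit {
        match queue.pop() {
            Some(receipt) => {
                processed.push(receipt);
                burnt += gas_per_receipt;
            }
            None => break,
        }
    }
    processed
}
```

Because both indices persist in the state, processing resumes exactly where it stopped in the next chunk.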
+
+Balance checker computes the total incoming balance and the total outgoing balance.
+The total incoming balance consists of the following:
+
+- Incoming validator rewards from the validator accounts update.
+- Sum of the initial account balances for all affected accounts. We compute it using the snapshot of the initial state.
+- Incoming receipt balances. The prepaid fees and gas multiplied by their gas prices, plus the attached balances from transfers and function calls.
+Refunds are considered to be free of charge for fees, but still have attached deposits.
+- Balances for the processed delayed receipts.
+- Initial balances for the postponed receipts. Postponed receipts are receipts from previous blocks that were processed, but were not executed.
+They are action receipts with some expected incoming data, usually for a callback on top of an awaited promise.
+When the expected data arrives later than the action receipt, the action receipt is postponed.
+Note, the data receipts cost 0, because they are completely prepaid when issued.
+
+The total outgoing balance consists of the following:
+
+- Sum of the final account balances for all affected accounts.
+- Outgoing receipt balances.
+- New delayed receipts. Local and incoming receipts that were not processed this time.
+- Final balances for the postponed receipts.
+- Total rent paid by all affected accounts.
+- Total new validator rewards. This is computed from the total gas burnt rewards.
+- Total balance burnt. In case a balance is burnt for some reason (e.g. the account was deleted during a refund), it's accounted here.
+- Total balance slashed. In case a validator is slashed for some reason, the balance is accounted here.
+
+When you sum up incoming balances and outgoing balances, they should match.
+If they don't match, we throw an error.
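The check itself is a straightforward sum-and-compare over the components listed above. A simplified sketch with illustrative field names (nearcore's balance checker tracks these quantities differently):

```rust
// Simplified balance checker: sum the incoming and outgoing components
// listed above and require equality. Field names are illustrative.
#[derive(Default)]
struct BalanceCheck {
    // Incoming components.
    validator_rewards: u128,
    initial_accounts: u128,
    incoming_receipts: u128,
    processed_delayed: u128,
    initial_postponed: u128,
    // Outgoing components.
    final_accounts: u128,
    outgoing_receipts: u128,
    new_delayed: u128,
    final_postponed: u128,
    rent_paid: u128,
    new_validator_rewards: u128,
    balance_burnt: u128,
    balance_slashed: u128,
}

impl BalanceCheck {
    fn verify(&self) -> Result<(), String> {
        let incoming = self.validator_rewards + self.initial_accounts
            + self.incoming_receipts + self.processed_delayed + self.initial_postponed;
        let outgoing = self.final_accounts + self.outgoing_receipts + self.new_delayed
            + self.final_postponed + self.rent_paid + self.new_validator_rewards
            + self.balance_burnt + self.balance_slashed;
        if incoming == outgoing {
            Ok(())
        } else {
            Err(format!("BalanceMismatch: incoming {} != outgoing {}", incoming, outgoing))
        }
    }
}
```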
+
+This is the low-level interface available to the smart contracts. It consists of the functions that the host (represented by
+Wasmer inside near-vm-runner) exposes to the guest (the smart contract compiled to Wasm).
+Due to Wasm restrictions the methods operate only with primitive types, like `u64`.
+Also, for all functions in the bindings specification the following is true:
+
+- Method execution could result in a `MemoryAccessViolation` error if one of the following happens:
+  - The method causes the host to read a piece of memory from the guest but it points outside the guest's memory;
+  - The guest causes the host to read from a register, but the register id is invalid.
+
+
+
+Execution of a bindings function call can result in an error being generated. This error causes execution of the smart contract
+to be terminated and the error message to be written into the logs of the transaction that caused the execution. Many bindings
+functions can throw specialized error messages, but there is also a list of error messages that can be thrown by almost
+any function:
+
+- `IntegerOverflow` -- happens when the guest passes some data to the host, and when the host tries to apply an arithmetic operation
+to it, it causes overflow or underflow;
+- `GasExceeded` -- happens when an operation performed by the guest uses more gas than the remaining prepaid gas;
+- `GasLimitExceeded` -- happens when the execution uses more gas than allowed by the global limit imposed in the economics
+config;
+- `StorageError` -- happens when a method fails to do some operation on the trie.
+
+The following binding methods cannot be invoked in a view call:
+
+- `signer_account_id`
+- `signer_account_pk`
+- `predecessor_account_id`
+- `attached_deposit`
+- `prepaid_gas`
+- `used_gas`
+- `promise_create`
+- `promise_then`
+- `promise_and`
+- `promise_batch_create`
+- `promise_batch_then`
+- `promise_batch_action_create_account`
+- `promise_batch_action_deploy_contract`
+- `promise_batch_action_function_call`
+- `promise_batch_action_transfer`
+- `promise_batch_action_stake`
+- `promise_batch_action_add_key_with_full_access`
+- `promise_batch_action_add_key_with_function_call`
+- `promise_batch_action_delete_key`
+- `promise_batch_action_delete_account`
+- `promise_results_count`
+- `promise_result`
+- `promise_return`
+
+If they are invoked, the smart contract execution will panic with `ProhibitedInView(<method name>)`.
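The guard can be sketched as a simple lookup against the prohibited set. A minimal illustration (the function name is invented, and the constant below lists only a subset of the methods above):

```rust
// Subset of the prohibited methods, for illustration only.
const PROHIBITED_IN_VIEW: &[&str] = &[
    "signer_account_id",
    "attached_deposit",
    "promise_create",
    "promise_return",
];

// Returns an error formatted like the ProhibitedInView panic described above.
fn check_in_view(is_view_call: bool, method: &str) -> Result<(), String> {
    if is_view_call && PROHIBITED_IN_VIEW.contains(&method) {
        return Err(format!("ProhibitedInView({})", method));
    }
    Ok(())
}
```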
+
+Registers allow a host function to return data into a buffer located inside the host, as opposed to a buffer
+located on the guest. A special operation can then be used to copy the content of a register into guest memory. Memory pointers
+can then be used to point either to the memory on the guest or the memory on the host, see below. Benefits:
+
+- We can have functions that return values that are not necessarily used, e.g. inserting a key-value pair into a trie can
+also return the preempted old value, which might not necessarily be used. Previously, if we returned something we
+would have to pass the blob from host into the guest, even if it was not used;
+- We can pass blobs of data between host functions without going through the guest, e.g. we can remove a value
+from the storage and insert it under a different key;
+- It makes the API cleaner, because we don't need to pass `buffer_len` and `buffer_ptr` as arguments to other functions;
+- It allows merging certain functions together, see `storage_iter_next`;
+- This is consistent with other APIs that were created for high performance, e.g. allegedly Ewasm has implemented
+SNARK-like computations in Wasm by exposing a bignum library through a stack-like interface to the guest. The guest
+can then manipulate a stack of 256-bit numbers that is located on the host.
+
+
+The registers can be used to pass blobs between host functions. For any function that
+takes a pair of arguments `*_len: u64, *_ptr: u64`, this pair points to a region of memory either on the guest or
+the host:
+
+- If `*_len != u64::MAX` it points to the memory on the guest;
+- If `*_len == u64::MAX` it points to the memory under the register `*_ptr` on the host.
+
+For example:
+`storage_write(u64::MAX, 0, u64::MAX, 1, 2)` -- inserts a key-value pair into storage, where the key is read from register 0,
+the value is read from register 1, and the result is saved to register 2.
+Note, if some function takes a `register_id` then it means this function can copy some data into this register. If
+`register_id == u64::MAX` then the copying does not happen. This allows some micro-optimizations in the future.
+Note, we allow multiple registers on the host, identified with a `u64` number. The guest does not have to use them in
+order and can, for instance, save some blob in register `5000` and another value in register `1`.
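The `(*_len, *_ptr)` convention can be demonstrated with a small resolver. This is a sketch, not the actual host implementation; a `HashMap` stands in for the register file and bounds errors are modeled as `None` rather than a `MemoryAccessViolation` trap:

```rust
use std::collections::HashMap;

// Resolve a (*_len, *_ptr) argument pair: when len == u64::MAX the pair
// addresses a register on the host (ptr is the register id), otherwise it
// addresses a slice of guest memory.
fn resolve_blob<'a>(
    len: u64,
    ptr: u64,
    guest_memory: &'a [u8],
    registers: &'a HashMap<u64, Vec<u8>>,
) -> Option<&'a [u8]> {
    if len == u64::MAX {
        // ptr is interpreted as a register id.
        registers.get(&ptr).map(|v| v.as_slice())
    } else {
        // ptr/len address guest memory; out of bounds would be a
        // MemoryAccessViolation in the real host (here: None).
        guest_memory.get(ptr as usize..(ptr as usize + len as usize))
    }
}
```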
+
+
+#![allow(unused_variables)]
+fn main() {
+read_register(register_id: u64, ptr: u64)
+}
+
+Writes the entire content of the register `register_id` into the memory of the guest starting at `ptr`.
+
+
+- If the content extends outside the memory allocated to the guest. In Wasmer, it returns a `MemoryAccessViolation` error message;
+- If `register_id` points to an unused register, returns an `InvalidRegisterId` error message.
+
+
+
+- If the content of the register extends outside the preallocated memory on the host side, or the pointer points to a
+wrong location, this function will overwrite memory that it is not supposed to overwrite, causing undefined behavior.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+register_len(register_id: u64) -> u64
+}
+
+Returns the size of the blob stored in the given register.
+
+
+- If the register is used, then returns the size, which can potentially be zero;
+- If the register is not used, returns `u64::MAX`.
+
+
+Here we provide a specification of the trie API. After this NEP is merged, the cases where our current implementation does
+not follow the specification are considered to be bugs that need to be fixed.
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_write(key_len: u64, key_ptr: u64, value_len: u64, value_ptr: u64, register_id: u64) -> u64
+}
+
+Writes a key-value pair into storage.
+
+
+- If the key is not in use, it inserts the key-value pair and does not modify the register;
+- If the key is in use, it inserts the key-value pair and copies the old value into the `register_id`.
+
+
+
+- If the key was not used, returns `0`;
+- If the key was used, returns `1`.
+
+
+
+- If `key_len + key_ptr` or `value_len + value_ptr` exceeds the memory container or points to an unused register, it panics
+with `MemoryAccessViolation`. (When we say that something panics with the given error we mean that we use the Wasmer API to
+create this error and terminate the execution of the VM. For mocks of the host that would only cause a non-name panic.)
+- If returning the preempted value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.
+
+
+
+- The `External::storage_set` trait method can return an error, which is then converted to a generic non-descriptive
+`StorageUpdateError`; here,
+however, the actual implementation does not return an error at all, see;
+- Does not return into the registers.
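The specified `storage_write` semantics can be mocked for testing. A sketch under the assumption that a `HashMap` stands in for the trie and registers; the struct name is invented:

```rust
use std::collections::HashMap;

// Mock of the storage_write spec: returns 1 and copies the old value into
// the register when the key was in use, otherwise returns 0 and leaves the
// register untouched.
#[derive(Default)]
struct MockHost {
    trie: HashMap<Vec<u8>, Vec<u8>>,
    registers: HashMap<u64, Vec<u8>>,
}

impl MockHost {
    fn storage_write(&mut self, key: &[u8], value: &[u8], register_id: u64) -> u64 {
        match self.trie.insert(key.to_vec(), value.to_vec()) {
            Some(old_value) => {
                // Key was in use: preempted value goes into the register.
                self.registers.insert(register_id, old_value);
                1
            }
            None => 0, // Key was not in use: register is not modified.
        }
    }
}
```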
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_read(key_len: u64, key_ptr: u64, register_id: u64) -> u64
+}
+
+Reads the value stored under the given key.
+
+
+- If the key is used, copies the content of the value into the `register_id`, even if the content is zero bytes;
+- If the key is not present, does not modify the register.
+
+
+
+- If the key was not present, returns `0`;
+- If the key was present, returns `1`.
+
+
+
+- If `key_len + key_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`;
+- If returning the preempted value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.
+
+
+
+- This function currently does not exist.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_remove(key_len: u64, key_ptr: u64, register_id: u64) -> u64
+}
+
+Removes the value stored under the given key.
+
+Very similar to `storage_read`:
+
+- If the key is used, removes the key-value pair from the trie and copies the content of the value into the `register_id`, even if the content is zero bytes;
+- If the key is not present, does not modify the register.
+
+
+
+- If the key was not present, returns `0`;
+- If the key was present, returns `1`.
+
+
+
+- If `key_len + key_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`;
+- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
+- If returning the preempted value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.
+
+
+
+- Does not return into the registers.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_has_key(key_len: u64, key_ptr: u64) -> u64
+}
+
+Checks if there is a key-value pair.
+
+
+- If the key is used, returns `1`, even if the value is zero bytes;
+- Otherwise returns `0`.
+
+
+
+- If `key_len + key_ptr` exceeds the memory container, it panics with `MemoryAccessViolation`.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_iter_prefix(prefix_len: u64, prefix_ptr: u64) -> u64
+}
+
+DEPRECATED, calling it will result in a `HostError::Deprecated` error.
+Creates an iterator object inside the host.
+Returns an identifier that uniquely differentiates the given iterator from other iterators that can be simultaneously
+created.
+
+
+- It iterates over the keys that have the provided prefix. The order of iteration is defined by the lexicographic
+order of the bytes in the keys. If there are no keys, it creates an empty iterator, see below on empty iterators;
+
+
+
+- If `prefix_len + prefix_ptr` exceeds the memory container, it panics with `MemoryAccessViolation`.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_iter_range(start_len: u64, start_ptr: u64, end_len: u64, end_ptr: u64) -> u64
+}
+
+DEPRECATED, calling it will result in a `HostError::Deprecated` error.
+Similarly to `storage_iter_prefix`,
+creates an iterator object inside the host.
+
+Unless lexicographically `start < end`, it creates an empty iterator.
+Iterates over all key-value pairs such that the keys are between `start` and `end`, where `start` is inclusive and `end` is exclusive.
+Note, this definition allows the `start` or `end` keys to not actually exist in the given trie.
+
+
+- If `start_len + start_ptr` or `end_len + end_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+storage_iter_next(iterator_id: u64, key_register_id: u64, value_register_id: u64) -> u64
+}
+
+DEPRECATED, calling it will result in a `HostError::Deprecated` error.
+Advances the iterator and saves the next key and value in the registers.
+
+
+- If the iterator is not empty (after calling next it points to a key-value pair), copies the key into `key_register_id` and the value into `value_register_id` and returns `1`;
+- If the iterator is empty, returns `0`.
+
+This allows us to iterate over keys that have zero bytes stored in their values.
+
+
+- If `key_register_id == value_register_id`, panics with `MemoryAccessViolation`;
+- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
+- If `iterator_id` does not correspond to an existing iterator, panics with `InvalidIteratorId`;
+- If between the creation of the iterator and calling `storage_iter_next` any modification to storage was done through
+`storage_write` or `storage_remove`, the iterator is invalidated and the error message is `IteratorWasInvalidated`.
+
+
+
+- Not implemented; currently we have `storage_iter_next` and `data_read` + `DATA_TYPE_STORAGE_ITER` that together fulfill
+the purpose, but have unspecified behavior.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_create(account_id_len: u64,
+ account_id_ptr: u64,
+ method_name_len: u64,
+ method_name_ptr: u64,
+ arguments_len: u64,
+ arguments_ptr: u64,
+ amount_ptr: u64,
+ gas: u64) -> u64
+}
+
+Creates a promise that will execute a method on the given account with the given arguments and attaches the given amount.
+`amount_ptr` points to a slice of bytes representing a `u128`.
+
+
+- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`
+or `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- Index of the new promise that uniquely identifies it within the current execution of the method.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_then(promise_idx: u64,
+ account_id_len: u64,
+ account_id_ptr: u64,
+ method_name_len: u64,
+ method_name_ptr: u64,
+ arguments_len: u64,
+ arguments_ptr: u64,
+ amount_ptr: u64,
+ gas: u64) -> u64
+}
+
+Attaches the callback that is executed after the promise pointed to by `promise_idx` is complete.
+
+
+- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
+- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr`
+or `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- Index of the new promise that uniquely identifies it within the current execution of the method.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_and(promise_idx_ptr: u64, promise_idx_count: u64) -> u64
+}
+
+Creates a new promise that completes when all the promises passed as arguments complete. Cannot be used with registers.
+`promise_idx_ptr` points to an array of `u64` elements, with `promise_idx_count` denoting the number of elements.
+The array contains the indices of the promises that need to be waited on jointly.
+
+
+- If `promise_ids_ptr + 8 * promise_idx_count` extends outside the guest memory, panics with `MemoryAccessViolation`;
+- If any of the promises in the array do not correspond to existing promises, panics with `InvalidPromiseIndex`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- Index of the new promise that uniquely identifies it within the current execution of the method.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_results_count() -> u64
+}
+
+If the current function is invoked by a callback we can access the execution results of the promises that
+caused the callback. This function returns the number of complete and incomplete callbacks.
+Note, we are only going to have incomplete callbacks once we have the `promise_or` combinator.
+
+
+- If there is only one callback, `promise_results_count()` returns `1`;
+- If there are multiple callbacks (e.g. created through `promise_and`), `promise_results_count()` returns their number;
+- If the function was not called through a callback, `promise_results_count()` returns `0`.
+
+
+
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_result(result_idx: u64, register_id: u64) -> u64
+}
+
+If the current function is invoked by a callback we can access the execution results of the promises that
+caused the callback. This function returns the result in blob format and places it into the register.
+
+
+- If the promise result is complete and successful, copies its blob into the register;
+- If the promise result is complete and failed, or is incomplete, keeps the register unused.
+
+
+
+- If the promise result is not complete, returns `0`;
+- If the promise result is complete and successful, returns `1`;
+- If the promise result is complete and failed, returns `2`.
+
+
+
+- If `result_idx` does not correspond to an existing result, panics with `InvalidResultIndex`;
+- If copying the blob exhausts the memory limit, it panics with `MemoryAccessViolation`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- We currently have two separate functions to check for result completion and copy it.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_return(promise_idx: u64)
+}
+
+When the promise `promise_idx` finishes executing, its result is considered to be the result of the current function.
+
+
+- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
+
+
+
+- The current name `return_promise` is inconsistent with the naming convention of the Promise API.
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_batch_create(account_id_len: u64, account_id_ptr: u64) -> u64
+}
+
+Creates a new promise towards the given `account_id` without any actions attached to it.
+
+
+- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- Index of the new promise that uniquely identifies it within the current execution of the method.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_batch_then(promise_idx: u64, account_id_len: u64, account_id_ptr: u64) -> u64
+}
+
+Attaches a new empty promise that is executed after the promise pointed to by `promise_idx` is complete.
+
+
+- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
+- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+- Index of the new promise that uniquely identifies it within the current execution of the method.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_batch_action_create_account(promise_idx: u64)
+}
+
+Appends a `CreateAccount` action to the batch of actions for the promise pointed to by `promise_idx`.
+Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R48
+
+
+- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
+- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`, it panics;
+- If called in a view function, panics with `ProhibitedInView`.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_batch_action_deploy_contract(promise_idx: u64, code_len: u64, code_ptr: u64)
+}
+
+Appends DeployContract
action to the batch of actions for the given promise pointed by promise_idx
.
+Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R49
+
+
+- If
promise_idx
does not correspond to an existing promise panics with InvalidPromiseIndex
.
+- If the promise pointed by the
promise_idx
is an ephemeral promise created by promise_and
.
+- If
code_len + code_ptr
points outside the memory of the guest or host, with MemoryAccessViolation
.
+- If called in a view function panics with
ProhibitedInView
.
+
+
+
+#![allow(unused_variables)]
+fn main() {
+promise_batch_action_function_call(promise_idx: u64,
+ method_name_len: u64,
+ method_name_ptr: u64,
+ arguments_len: u64,
+ arguments_ptr: u64,
+ amount_ptr: u64,
+ gas: u64)
+}
+
+Appends FunctionCall
action to the batch of actions for the given promise pointed by promise_idx
.
+Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R50
+NOTE: Calling promise_batch_create
and then promise_batch_action_function_call
will produce the same promise as calling promise_create
directly.
+
+
+- If
promise_idx
does not correspond to an existing promise panics with InvalidPromiseIndex
.
+- If the promise pointed by the
promise_idx
is an ephemeral promise created by promise_and
.
+- If
account_id_len + account_id_ptr
or method_name_len + method_name_ptr
or arguments_len + arguments_ptr
+or amount_ptr + 16
points outside the memory of the guest or host, with MemoryAccessViolation
.
+- If called in a view function panics with
ProhibitedInView
.
+
+
+
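The NOTE above about `promise_create` equivalence can be sketched with a toy host model. The `Host` class, its action tuples, and the account/method names below are illustrative assumptions, not nearcore internals; the point is only that `promise_create` is batch-create plus a single `FunctionCall` action.

```python
# Toy model of the promise host functions (illustrative, not the real VM).
class Host:
    def __init__(self):
        self.promises = []  # each promise: (receiver_id, [actions])

    def promise_batch_create(self, receiver_id):
        self.promises.append((receiver_id, []))
        return len(self.promises) - 1  # promise index

    def promise_batch_action_function_call(self, idx, method, args, amount, gas):
        self.promises[idx][1].append(("FunctionCall", method, args, amount, gas))

    def promise_create(self, receiver_id, method, args, amount, gas):
        # promise_create == promise_batch_create + one FunctionCall action.
        idx = self.promise_batch_create(receiver_id)
        self.promise_batch_action_function_call(idx, method, args, amount, gas)
        return idx

h1, h2 = Host(), Host()
i = h1.promise_batch_create("alice.near")
h1.promise_batch_action_function_call(i, "method", b"{}", 0, 10**12)
j = h2.promise_create("alice.near", "method", b"{}", 0, 10**12)
assert h1.promises[i] == h2.promises[j]  # identical receipts either way
```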
```rust
promise_batch_action_transfer(promise_idx: u64, amount_ptr: u64)
```

Appends a `Transfer` action to the batch of actions for the promise pointed to by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R51

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

```rust
promise_batch_action_stake(promise_idx: u64,
                           amount_ptr: u64,
                           bls_public_key_len: u64,
                           bls_public_key_ptr: u64)
```

Appends a `Stake` action to the batch of actions for the promise pointed to by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R52

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given BLS public key is not a valid BLS public key (e.g. wrong length), panics with `InvalidPublicKey`.
- If `amount_ptr + 16` or `bls_public_key_len + bls_public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

```rust
promise_batch_action_add_key_with_full_access(promise_idx: u64,
                                              public_key_len: u64,
                                              public_key_ptr: u64,
                                              nonce: u64)
```

Appends an `AddKey` action to the batch of actions for the promise pointed to by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54
The access key will have `FullAccess` permission; details: https://github.com/nearprotocol/NEPs/blob/master/text/0005-access-keys.md#guide-level-explanation

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`.
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

```rust
promise_batch_action_add_key_with_function_call(promise_idx: u64,
                                                public_key_len: u64,
                                                public_key_ptr: u64,
                                                nonce: u64,
                                                allowance_ptr: u64,
                                                receiver_id_len: u64,
                                                receiver_id_ptr: u64,
                                                method_names_len: u64,
                                                method_names_ptr: u64)
```

Appends an `AddKey` action to the batch of actions for the promise pointed to by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54
The access key will have `FunctionCall` permission; details: https://github.com/nearprotocol/NEPs/blob/master/text/0005-access-keys.md#guide-level-explanation

Normal behavior:

- If the `allowance` value (not the pointer) is `0`, the allowance is set to `None` (which means unlimited allowance); a positive value represents a `Some(...)` allowance.
- The given `method_names` is a `utf-8` string with `,` used as a separator. The VM splits the given string into a vector of strings.

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`.
- If `method_names` is not a valid `utf-8` string, fails with `BadUTF8`.
- If `public_key_len + public_key_ptr`, `allowance_ptr + 16`, `receiver_id_len + receiver_id_ptr`, or `method_names_len + method_names_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

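A minimal sketch of the `allowance` and `method_names` conventions described above. The `parse_access_key_args` helper is hypothetical (the real host works on raw pointers and registers); it only mirrors the two rules: zero allowance means `None`, and `method_names` is split on commas.

```python
def parse_access_key_args(allowance, method_names):
    """Hypothetical helper mirroring the host-side rules above."""
    names = method_names.decode("utf-8")  # invalid UTF-8 would be the BadUTF8 case
    return {
        # Allowance 0 means None, i.e. unlimited; a positive value means Some(allowance).
        "allowance": None if allowance == 0 else allowance,
        "method_names": names.split(",") if names else [],
    }

key = parse_access_key_args(0, b"get_status,set_status")
assert key["allowance"] is None
assert key["method_names"] == ["get_status", "set_status"]
```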
```rust
promise_batch_action_delete_key(promise_idx: u64,
                                public_key_len: u64,
                                public_key_ptr: u64)
```

Appends a `DeleteKey` action to the batch of actions for the promise pointed to by `promise_idx`.
Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R55

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`.
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

```rust
promise_batch_action_delete_account(promise_idx: u64,
                                    beneficiary_id_len: u64,
                                    beneficiary_id_ptr: u64)
```

Appends a `DeleteAccount` action to the batch of actions for the promise pointed to by `promise_idx`.
The action is used to delete an account. It can be performed on a newly created account, on your own account, or on an account with insufficient funds to pay rent. It takes `beneficiary_id` to indicate where to send the remaining funds.

Panics:

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`.
- If `beneficiary_id_len + beneficiary_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`.
- If called in a view function, panics with `ProhibitedInView`.

The Context API mostly provides read-only functions that access current information about the blockchain and the accounts (the account that originally initiated the chain of cross-contract calls, the immediate account that called the current one, and the account of the current contract), as well as other important information like storage usage.

Many of the below functions are currently implemented through `data_read`, which allows reading generic context data. However, there is no reason to have `data_read` instead of the specific functions:

- `data_read` does not solve forward compatibility. If later we want to add another context function, e.g. `executed_operations`, we can just declare it as a new function, instead of encoding it as `DATA_TYPE_EXECUTED_OPERATIONS = 42` which is passed as the first argument to `data_read`;
- `data_read` does not help with renaming. If later we decide to rename `signer_account_id` to `originator_id`, then one could argue that contracts relying on `data_read` would not break, while contracts relying on `signer_account_id()` would. However, a name change often means a change of semantics, which means the contracts using this function are no longer safe to execute anyway.

However, there is one reason to not have `data_read` -- it makes the API more human-like, which is the general direction Wasm APIs, like WASI, are moving towards.

```rust
current_account_id(register_id: u64)
```

Saves the account id of the current contract that we execute into the register.

Panics:

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`.

```rust
signer_account_id(register_id: u64)
```

All contract calls are a result of some transaction that was signed by some account using some access key and submitted into a memory pool (either through the wallet using RPC or by a node itself). This function returns the id of that account.

Normal behavior:

- Saves the bytes of the signer account id into the register.

Panics:

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Notes:

- Currently we conflate `originator_id` and `sender_id` in our code base.

```rust
signer_account_pk(register_id: u64)
```

Saves the public key of the access key that was used by the signer into the register. In rare situations a smart contract might want to know the exact access key that was used to send the original transaction, e.g. to increase the allowance or manipulate the public key.

Panics:

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

```rust
predecessor_account_id(register_id: u64)
```

All contract calls are a result of a receipt; this receipt might be created by a transaction that does a function invocation on the contract, or by another contract as a result of a cross-contract call.

Normal behavior:

- Saves the bytes of the predecessor account id into the register.

Panics:

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

```rust
input(register_id: u64)
```

Reads the input to the contract call into the register. Input is expected to be in JSON-format.

Normal behavior:

- If input is provided, saves the bytes (potentially zero) of input into the register.
- If input is not provided, does not modify the register.

Returns:

- If input was not provided, returns `0`;
- If input was provided, returns `1`; if input is zero bytes, returns `1`, too.

Panics:

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`.

Notes:

- Implemented as part of `data_read`. However, there is no reason to have one unified function, like `data_read`, that can be used to read all kinds of context data.

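The return-value convention above (distinguishing "no input" from "empty input") can be sketched as follows; the dictionary-based register model is illustrative, not the actual host implementation.

```python
def host_input(registers, register_id, call_input):
    """Toy model of `input(register_id)`: the register is written only when
    input exists; the return value distinguishes absent from empty input."""
    if call_input is None:
        return 0  # register left untouched
    registers[register_id] = call_input
    return 1

regs = {}
no_input = host_input(regs, 0, None)       # 0: no input, regs unchanged
empty = host_input(regs, 0, b"")           # 1: zero-byte input still counts
json_in = host_input(regs, 0, b'{"k":1}')  # 1: regular JSON input
assert (no_input, empty, json_in) == (0, 1, 1)
```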
```rust
block_index() -> u64
```

Returns the current block height from genesis.

```rust
block_timestamp() -> u64
```

Returns the current block timestamp (number of non-leap nanoseconds since January 1, 1970 0:00:00 UTC).

```rust
epoch_height() -> u64
```

Returns the current epoch height from genesis.

```rust
storage_usage() -> u64
```

Returns the number of bytes used by the contract if it was saved to the trie as of the invocation. This includes:

- The data written with `storage_*` functions during current and previous executions;
- The bytes needed to store the account protobuf and the access keys of the given account.

Accounts own a certain balance; and each transaction and each receipt have a certain amount of balance and prepaid gas attached to them.
During the contract execution, the contract has access to the following `u128` values:

- `account_balance` -- the balance attached to the given account. This includes the `attached_deposit` that was attached to the transaction;
- `attached_deposit` -- the balance that was attached to the call, which will be immediately deposited before the contract execution starts;
- `prepaid_gas` -- the tokens attached to the call that can be used to pay for the gas;
- `used_gas` -- the gas that was already burnt during the contract execution and attached to promises (cannot exceed `prepaid_gas`).

If contract execution fails, `prepaid_gas - used_gas` is refunded back to `signer_account_id` and `attached_deposit` is refunded back to `predecessor_account_id`.

The following spec is the same for both functions:

```rust
account_balance(balance_ptr: u64)
attached_deposit(balance_ptr: u64)
```

-- writes the value into the `u128` variable pointed to by `balance_ptr`.

Panics:

- If `balance_ptr + 16` points outside the memory of the guest, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

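Since the host writes a `u128` as 16 raw bytes at `balance_ptr`, guest-side code has to reassemble the integer itself. A minimal sketch in Python, assuming little-endian byte order for the host encoding:

```python
def read_u128(memory, ptr):
    """Interpret the 16 bytes at `ptr` as a little-endian u128
    (little-endian is an assumption about the host encoding)."""
    data = memory[ptr:ptr + 16]
    if len(data) != 16:
        raise MemoryError("MemoryAccessViolation")  # the host would panic instead
    return int.from_bytes(data, "little")

one_near = 10**24  # 1 NEAR in yoctoNEAR
mem = one_near.to_bytes(16, "little")  # pretend the host wrote it at offset 0
assert read_u128(mem, 0) == one_near
```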
```rust
prepaid_gas() -> u64
used_gas() -> u64
```

Panics:

- If called in a view function, panics with `ProhibitedInView`.

```rust
random_seed(register_id: u64)
```

Returns a random seed that can be used for pseudo-random number generation in a deterministic way.

Panics:

- If the size of the registers exceeds the set limit, panics with `MemoryAccessViolation`.

```rust
sha256(value_len: u64, value_ptr: u64, register_id: u64)
```

Hashes the given sequence of bytes using sha256 and writes the result into `register_id`.

Panics:

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.

```rust
keccak256(value_len: u64, value_ptr: u64, register_id: u64)
```

Hashes the given sequence of bytes using keccak256 and writes the result into `register_id`.

Panics:

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.

```rust
keccak512(value_len: u64, value_ptr: u64, register_id: u64)
```

Hashes the given sequence of bytes using keccak512 and writes the result into `register_id`.

Panics:

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.

```rust
value_return(value_len: u64, value_ptr: u64)
```

Sets the blob of data as the return value of the contract.

Panics:

- If `value_len + value_ptr` exceeds the memory container or points to an unused register, panics with `MemoryAccessViolation`.

```rust
panic()
```

Terminates the execution of the program with panic `GuestPanic("explicit guest panic")`.

```rust
panic_utf8(len: u64, ptr: u64)
```

Terminates the execution of the program with panic `GuestPanic(s)`, where `s` is the given UTF-8 encoded string.

If `len == u64::MAX`, treats the string as null-terminated with character `'\0'`.

Panics:

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`;
- If the string is not UTF-8, returns `BadUtf8`.
- If the string length without the null-termination symbol is larger than `config.max_log_len`, returns `BadUtf8`.

```rust
log_utf8(len: u64, ptr: u64)
```

Logs the UTF-8 encoded string.

If `len == u64::MAX`, treats the string as null-terminated with character `'\0'`.

Panics:

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`;
- If the string is not UTF-8, returns `BadUtf8`.
- If the string length without the null-termination symbol is larger than `config.max_log_len`, returns `BadUtf8`.

```rust
log_utf16(len: u64, ptr: u64)
```

Logs the UTF-16 encoded string. `len` is the number of bytes in the string.
See https://stackoverflow.com/a/5923961 which explains that null termination is not defined through encoding.

If `len == u64::MAX`, treats the string as null-terminated with the two-byte sequence `0x00 0x00`.

Panics:

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`.

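The two-byte terminator rule above can be sketched as scanning aligned 2-byte code units until `0x00 0x00` is found. The helper below is illustrative; the little-endian decode is an assumption for the sake of the example.

```python
def read_utf16_null_terminated(memory, ptr):
    """Scan 2-byte units starting at `ptr` until the 0x00 0x00 terminator,
    then decode as UTF-16 little-endian (the byte order is an assumption)."""
    end = ptr
    while memory[end:end + 2] != b"\x00\x00":
        end += 2  # step by whole code units, never split one
    return memory[ptr:end].decode("utf-16-le")

buf = "hi".encode("utf-16-le") + b"\x00\x00" + b"junk after terminator"
assert read_utf16_null_terminated(buf, 0) == "hi"
```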
```rust
abort(msg_ptr: u32, filename_ptr: u32, line: u32, col: u32)
```

Special import kept for compatibility with AssemblyScript contracts. Not called by smart contracts directly, but instead called by the code generated by AssemblyScript.

In the future we can have some of the registers be on the guest. For instance, a guest can tell the host that it has some pre-allocated memory that it wants to be used for a register, e.g.

```rust
set_guest_register(register_id: u64, register_ptr: u64, max_register_size: u64)
```

will assign `register_id` to a span of memory on the guest. The host then would also know the size of that buffer on the guest, and can throw a panic if an attempted copy exceeds the guest register size.

+
+type: u32
+Protocol version that this genesis works with.
+
+type: DateTime
+Official time of blockchain start.
+
+type: String
+ID of the blockchain. This must be unique for every blockchain.
+If your testnet blockchains do not have unique chain IDs, you will have a bad time.
+
+type: u32
+Number of block producer seats at genesis.
+
+type: [ValidatorId]
+Defines number of shards and number of validators per each shard at genesis.
+
+type: [ValidatorId]
+Expected number of fisherman per shard.
+
+type: bool
+Enable dynamic re-sharding.
+
+type: BlockIndex,
+Epoch length counted in blocks.
+
+type: Gas,
+Initial gas limit for a block
+
+type: Balance,
+Initial gas price
+
+type: u8
+Criterion for kicking out block producers (this is a number between 0 and 100)
+
+type: u8
+Criterion for kicking out chunk producers (this is a number between 0 and 100)
+
+type: Fraction
+Gas price adjustment rate
+
+type: RuntimeConfig
+Runtime configuration (mostly economics constants).
+
+type: [AccountInfo]
+List of initial validators.
+
+type: Vec<StateRecord>
+Records in storage at genesis (get split into shards at genesis creation).
+
+type: u64
+Number of blocks for which a given transaction is valid
+
+type: Fraction
+Developer reward percentage.
+
+type: Fraction
+Protocol treasury percentage.
+
+type: Fraction
+Maximum inflation on the total supply every epoch.
+
+type: Balance
+Total supply of tokens at genesis.
+
+type: u64
+Expected number of blocks per year
+
+type: AccountId
+Protocol treasury account
+
+
+For the specific economic specs, refer to Economics Section.
+
+
+The structure that holds the parameters of the runtime, mostly economics.
+
+type: Balance
+The cost to store one byte of storage per block.
+
+type: Balance
+Costs of different actions that need to be performed when sending and processing transaction
+and receipts.
+
+type: BlockIndex
+The minimum number of blocks of storage rent an account has to maintain to prevent forced deletion.
+
+type: RuntimeFeesConfig
+Costs of different actions that need to be performed when sending and processing transaction and receipts.
+
+type: VMConfig,
+Config of wasm operations.
+
+type: Balance
+The baseline cost to store account_id of short length per block.
+The original formula in NEP#0006 is 1,000 / (3 ^ (account_id.length - 2))
for cost per year.
+This value represents 1,000
above adjusted to use per block
+
+Economic parameters for runtime
+
+type: Fee
+Describes the cost of creating an action receipt, ActionReceipt
, excluding the actual cost
+of actions.
+
+type: DataReceiptCreationConfig
+Describes the cost of creating a data receipt, DataReceipt
.
+
+type: ActionCreationConfig
+Describes the cost of creating a certain action, Action
. Includes all variants.
+
+type: StorageUsageConfig
+Describes fees for storage rent
+
+type: Fraction
+Fraction of the burnt gas to reward to the contract account for execution.
+
+Describes the cost of creating an access key.
+
+type: Fee
+Base cost of creating a full access access-key.
+
+type: Fee
+Base cost of creating an access-key restricted to specific functions.
+
+type: Fee
+Cost per byte of method_names of creating a restricted access-key.
+
+Describes the cost of creating a specific action, Action
. Includes all variants.
+
+type: Fee
+Base cost of creating an account.
+
+type: Fee
+Base cost of deploying a contract.
+
+type: Fee
+Cost per byte of deploying a contract.
+
+type: Fee
+Base cost of calling a function.
+
+type: Fee
+Cost per byte of method name and arguments of calling a function.
+
+type: Fee
+Base cost of making a transfer.
+
+type: Fee
+Base cost of staking.
+
+type: AccessKeyCreationConfig
+Base cost of adding a key.
+
+type: Fee
+Base cost of deleting a key.
+
+type: Fee
+Base cost of deleting an account.
+
Describes the cost of creating a data receipt, `DataReceipt`.

type: Fee
Base cost of creating a data receipt.

type: Fee
Additional cost per byte sent.

Describes the cost of storage per block.

type: Gas
Base storage usage for an account.

type: Gas
Base cost for a k/v record.

type: Gas
Cost per byte of key.

type: Gas
Cost per byte of value.

type: Gas
Cost per byte of contract code.

Costs associated with an object that can only be sent over the network (and executed by the receiver).

Fee for sending an object from the sender to itself, guaranteeing that it does not leave the shard.

Fee for sending an object potentially across the shards.

Fee for executing the object.

type: u64

type: u64

Config of wasm operations.

type: ExtCostsConfig
Costs for runtime externals.

type: u32
Gas cost of growing memory by a single page.

type: u32
Gas cost of a regular operation.

type: Gas
Max amount of gas that can be used, excluding gas attached to promises.

type: u32
How tall the stack is allowed to grow.

type: u32
The initial number of memory pages.

type: u32
The maximal number of memory pages a contract is allowed to have.

type: u64
Limit of memory used by registers.

type: u64
Maximum number of bytes that can be stored in a single register.

type: u64
Maximum number of registers that can be used simultaneously.

type: u64
Maximum number of log entries.

type: u64
Maximum length of a single log, in bytes.


type: Gas
Base cost for calling a host function.

type: Gas
Base cost for a guest memory read.

type: Gas
Cost per byte for a guest memory read.

type: Gas
Base cost for a guest memory write.

type: Gas
Cost per byte for a guest memory write.

type: Gas
Base cost for reading from a register.

type: Gas
Cost per byte for reading from a register.

type: Gas
Base cost for writing into a register.

type: Gas
Cost per byte for writing into a register.

type: Gas
Base cost of decoding utf8.

type: Gas
Cost per byte of decoding utf8.

type: Gas
Base cost of decoding utf16.

type: Gas
Cost per byte of decoding utf16.

type: Gas
Base cost of computing sha256.

type: Gas
Cost per byte of computing sha256.

type: Gas
Base cost of computing keccak256.

type: Gas
Cost per byte of computing keccak256.

type: Gas
Base cost of computing keccak512.

type: Gas
Cost per byte of computing keccak512.

type: Gas
Base cost for calling logging.

type: Gas
Cost for logging, per byte.

type: Gas
Storage trie write key base cost.

type: Gas
Storage trie write key per byte cost.

type: Gas
Storage trie write value per byte cost.

type: Gas
Storage trie write cost per byte of evicted value.

type: Gas
Storage trie read key base cost.

type: Gas
Storage trie read key per byte cost.

type: Gas
Storage trie read value per byte cost.

type: Gas
Remove key from trie base cost.

type: Gas
Remove key from trie per byte cost.

type: Gas
Remove key from trie returned value per byte cost.

type: Gas
Storage trie check for key existence base cost.

type: Gas
Storage trie check for key existence cost per key byte.

type: Gas
Create trie prefix iterator base cost.

type: Gas
Create trie prefix iterator cost per byte.

type: Gas
Create trie range iterator base cost.

type: Gas
Create trie range iterator cost per byte of the from key.

type: Gas
Create trie range iterator cost per byte of the to key.

type: Gas
Trie iterator per key base cost.

type: Gas
Trie iterator next key per byte cost.

type: Gas
Trie iterator next value per byte cost.

type: Gas
Cost per touched trie node.

type: Gas
Cost for calling `promise_and`.

type: Gas
Cost for calling `promise_and` for each promise.

type: Gas
Cost for calling `promise_return`.

type: Enum
Enum that describes one of the records in the state storage.

type: Unnamed struct
Record that contains account information for a given account ID.

type: AccountId
The account ID of the account.

type: Account
The account structure. Serialized to JSON. U128 types are serialized to strings.

type: Unnamed struct
Record that contains a key-value data record for a contract at the given account ID.

type: AccountId
The account ID of the contract that contains this data record.

type: Vec<u8>
Data key serialized in Base64 format.
NOTE: The key doesn't contain the data separator.

type: Vec<u8>
Value serialized in Base64 format.

type: Unnamed struct
Record that contains contract code for a given account ID.

type: AccountId
The account ID that has the contract.

type: Vec<u8>
WASM binary contract code serialized in Base64 format.

type: Unnamed struct
Record that contains an access key for a given account ID.

type: AccountId
The account ID of the access key owner.

type: [PublicKey]
The public key for the access key in JSON-friendly string format. E.g. `ed25519:5JFfXMziKaotyFM1t4hfzuwh8GZMYCiKHfqw1gTEWMYT`

type: AccessKey
The access key serialized in JSON format.

type: Box<Receipt>
Record that contains a receipt that was postponed on a shard (e.g. it's waiting for incoming data).
The receipt is in JSON-friendly format. The receipt can only be an `ActionReceipt`.
NOTE: Box is used to decrease the fixed size of the entire enum.

type: Unnamed struct
Record that contains information about received data for some action receipt, that is not yet received or processed for a given account ID.
The data was received using a `DataReceipt` before. See Receipts for details.

type: AccountId
The account ID of the receiver of the data.

type: [CryptoHash]
Data ID of the data in base58 format.

type: Option<Vec<u8>>
Optional data encoded in base64 format, or null in JSON.

type: Box<Receipt>
Record that contains a receipt that was delayed on a shard. It means the shard was overwhelmed with receipts and is processing receipts from the backlog.
The receipt is in JSON-friendly format. See Delayed Receipts for details.
NOTE: Box is used to decrease the fixed size of the entire enum.

This is under heavy development.

| Name | Value |
| --- | --- |
| yoctoNEAR | smallest undividable amount of native currency NEAR |
| NEAR | `10**24` yoctoNEAR |
| block | smallest on-chain unit of time |
| gas | unit to measure usage of blockchain |

| Name | Value |
| --- | --- |
| INITIAL_SUPPLY | `10**33` yoctoNEAR |
| NEAR | `10**24` yoctoNEAR |
| MIN_GAS_PRICE | `10**5` yoctoNEAR |
| REWARD_PCT_PER_YEAR | 0.05 |
| BLOCK_TIME | 1 second |
| EPOCH_LENGTH | 43,200 blocks |
| EPOCHS_A_YEAR | 730 epochs |
| POKE_THRESHOLD | 500 blocks |
| INITIAL_MAX_STORAGE | `10 * 2**40` bytes == 10 TB |
| TREASURY_PCT | 0.1 |
| TREASURY_ACCOUNT_ID | treasury |
| CONTRACT_PCT | 0.3 |
| INVALID_STATE_SLASH_PCT | 0.05 |
| ADJ_FEE | 0.001 |
| TOTAL_SEATS | 100 |

| Name | Description | Initial value |
| --- | --- | --- |
| totalSupply[t] | Total supply of NEAR at given epoch[t] | INITIAL_SUPPLY |
| gasPrice[t] | The cost of 1 unit of gas in NEAR tokens (see Transaction Fees section below) | MIN_GAS_PRICE |
| storageAmountPerByte[t] | kept constant, INITIAL_SUPPLY / INITIAL_MAX_STORAGE | `~9.09 * 10**19` yoctoNEAR |

The protocol sets a ceiling for the maximum issuance of tokens, and dynamically decreases this issuance depending on the amount of total fees in the system.

| Name | Description |
| --- | --- |
| reward[t] | `totalSupply[t] * ((1 + REWARD_PCT_PER_YEAR) ** (1/EPOCHS_A_YEAR) - 1)` |
| epochFee[t] | `sum([(1 - DEVELOPER_PCT_PER_YEAR) * block.txFee + block.stateFee for block in epoch[t]])` |
| issuance[t] | The amount of token issued at a certain epoch[t], `issuance[t] = reward[t] - epochFee[t]` |

Where `totalSupply[t]` is the total number of tokens in the system at a given time t.
If `epochFee[t] > reward[t]`, the issuance is negative, and thus `totalSupply[t]` decreases in the given epoch.

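A quick numeric sanity check of the reward formula, assuming zero fees (so `issuance[t] == reward[t]`): compounding the per-epoch reward over a year should reproduce exactly `REWARD_PCT_PER_YEAR` of inflation.

```python
INITIAL_SUPPLY = 10**33     # yoctoNEAR
REWARD_PCT_PER_YEAR = 0.05
EPOCHS_A_YEAR = 730

def reward(total_supply):
    # reward[t] = totalSupply[t] * ((1 + REWARD_PCT_PER_YEAR) ** (1/EPOCHS_A_YEAR) - 1)
    return total_supply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1)

# With zero fees, issuance[t] == reward[t]; compound over one year of epochs.
supply = float(INITIAL_SUPPLY)
for _ in range(EPOCHS_A_YEAR):
    supply += reward(supply)
assert abs(supply / INITIAL_SUPPLY - 1.05) < 1e-9  # ~5% annual inflation
```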
Each transaction, before inclusion, must buy enough gas to cover the cost of bandwidth and execution.

Gas unifies execution and bytes of bandwidth usage of the blockchain. Each WASM instruction or pre-compiled function gets assigned an amount of gas based on measurements on a common-denominator computer. The same goes for weighting the used bandwidth, based on general unified costs. For specific gas mapping numbers see ???.

Gas is priced dynamically in `NEAR` tokens. At each block `t`, we update `gasPrice[t] = gasPrice[t - 1] * (1 + (gasUsed[t - 1] / gasLimit[t - 1] - 0.5) * ADJ_FEE)`,
where `gasUsed[t] = sum([sum([gas(tx) for tx in chunk]) for chunk in block[t]])`.

`gasLimit[t]` is defined as `gasLimit[t] = gasLimit[t - 1] + validatorGasDiff[t - 1]`, where `validatorGasDiff` is a parameter with which each chunk producer can either increase or decrease the gas limit, based on how long it took to execute the previous chunk. `validatorGasDiff[t]` can only be within `±0.1%` of `gasLimit[t]`, and only if `gasUsed[t - 1] > 0.9 * gasLimit[t - 1]`.

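A small worked example of the update rule in its multiplicative form `1 + (usage - 0.5) * ADJ_FEE` (the form assumed above): a fully used block nudges the price up by 0.05%, an empty block down by the same amount, and half-full blocks are the equilibrium.

```python
ADJ_FEE = 0.001
MIN_GAS_PRICE = 10**5  # yoctoNEAR

def next_gas_price(gas_price, gas_used, gas_limit):
    # gasPrice[t] = gasPrice[t-1] * (1 + (gasUsed[t-1] / gasLimit[t-1] - 0.5) * ADJ_FEE)
    return gas_price * (1 + (gas_used / gas_limit - 0.5) * ADJ_FEE)

full = next_gas_price(MIN_GAS_PRICE, 10**15, 10**15)      # 100% utilization
half = next_gas_price(MIN_GAS_PRICE, 5 * 10**14, 10**15)  # 50% utilization
empty = next_gas_price(MIN_GAS_PRICE, 0, 10**15)          # empty block
assert half == MIN_GAS_PRICE  # half-full blocks leave the price unchanged
```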
+Amount of NEAR
on the account represents right for this account to take portion of the blockchain's overall global state. Transactions fail if account doesn't have enough balance to cover the storage required for given account.
+def check_storage_cost(account):
+ # Compute requiredAmount given size of the account.
+ requiredAmount = sizeOf(account) * storageAmountPerByte
+ return Ok() if account.amount + account.locked < requiredAmount else Error(requiredAmount)
+
+# Check when transaction is received to verify that it is valid.
+def verify_transaction(tx, signer_account):
+ # ...
+ # Updates signer's account with the amount it will have after executing this tx.
+ update_post_amount(signer_account, tx)
+ result = check_storage_cost(signer_account)
+ # If enough balance OR account is been deleted by the owner.
+ if not result.ok() or DeleteAccount(tx.signer_id) in tx.actions:
+ assert LackBalanceForState(signer_id: tx.signer_id, amount: result.err())
+
+# After account touched / changed, we check it still has enough balance to cover it's storage.
+def on_account_change(block_height, account):
+ # ... execute transaction / receipt changes ...
+ # Validate post-condition and revert if it fails.
+ result = check_storage_cost(sender_account)
+ if not result.ok():
+ assert LackBalanceForState(signer_id: tx.signer_id, amount: result.err())
+
Here `sizeOf(account)` includes the size of `account_id`, the `account` structure, and the size of all the data stored under the account.
An account can end up with insufficient balance if it gets slashed. The account then becomes unusable, as all originating transactions will fail (including deletion).
The only way to recover it in this case is by sending extra funds from a different account.
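For intuition, here is a toy numeric check of the storage-staking rule above; the per-byte price is an invented value for the example, not the live protocol parameter:

```python
STORAGE_AMOUNT_PER_BYTE = 10**19  # hypothetical yoctoNEAR per byte, for illustration only

def has_enough_for_storage(amount, locked, account_size_bytes):
    # Mirrors check_storage_cost: the total balance must cover the storage requirement.
    required = account_size_bytes * STORAGE_AMOUNT_PER_BYTE
    return amount + locked >= required

print(has_enough_for_storage(2 * 10**21, 0, 182))  # True: 2e21 >= 1.82e21
print(has_enough_for_storage(10**21, 0, 182))      # False: 1e21 < 1.82e21
```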
+
NEAR validators provide their resources in exchange for a reward `epochReward[t]`, where `[t]` represents the considered epoch.

| Name | Description |
| - | - |
| `epochReward[t]` | `= coinbaseReward[t] + epochFee[t]` |
| `coinbaseReward[t]` | The maximum inflation per `epoch[t]`, as a function of `REWARD_PCT_PER_YEAR / EPOCHS_A_YEAR` |

| Name | Description |
| - | - |
| `proposals` | The array of all existing validators, minus the ones which were online less than `ONLINE_THRESHOLD`, plus new validators |
| `INCLUSION_FEE` | The arbitrary transaction fee that new validators offer to be included in the `proposals`, to mitigate censorship risks by existing validators |
| `ONLINE_THRESHOLD` | `0.9` |
| `epoch[T]` | The epoch when `validator[v]` is selected from the `proposals` auction array |
| `seatPrice` | The minimum stake needed to become a validator in `epoch[T]` |
| `stake[v]` | The amount in NEAR tokens staked by `validator[v]` during the auction at the end of `epoch[T-2]`, minus `INCLUSION_FEE` |
| `shard[v]` | The shard randomly assigned to `validator[v]` at `epoch[T-1]`, so that its node can download and sync with its state |
| `numSeats` | The number of seats assigned to `validator[v]`, calculated from `stake[v] / seatPrice` |
| `validatorAssignments` | The resulting ordered array of all `proposals` with a stake higher than `seatPrice` |

`validatorAssignments` is then split into two groups: block/chunk producers and hidden validators.

| Name | Value |
| - | - |
| `epochFee[t]` | `sum([(1 - DEVELOPER_PCT_PER_YEAR) * txFee[i] + stateFee[i] for i in epoch[t]])`, where `[i]` represents any considered block within `epoch[t]` |

The total reward for every epoch `t` is equal to:

```
reward[t] = totalSupply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1)
```
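Plugging in example numbers makes the formula concrete; the parameter values here are assumptions for illustration, not mainnet settings:

```python
REWARD_PCT_PER_YEAR = 0.05   # assumed 5% yearly inflation cap
EPOCHS_A_YEAR = 730          # assumed ~12-hour epochs

def epoch_reward(total_supply):
    # The per-epoch reward that compounds to REWARD_PCT_PER_YEAR over a year.
    return total_supply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1)

# For a total supply of 10^9 this is roughly 6.7 * 10^4 tokens per epoch.
print(epoch_reward(10**9))
```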
+
The uptime of a specific validator is computed as:

```
pct_online[t][j] = (num_produced_blocks[t][j] / expected_produced_blocks[t][j] + num_produced_chunks[t][j] / expected_produced_chunks[t][j]) / 2
if pct_online[t][j] > ONLINE_THRESHOLD:
    uptime[t][j] = (pct_online[t][j] - ONLINE_THRESHOLD) / (1 - ONLINE_THRESHOLD)
else:
    uptime[t][j] = 0
```
Where `expected_produced_blocks` and `expected_produced_chunks` are the numbers of blocks and chunks, respectively, that the given validator `j` is expected to produce in epoch `t`.
The reward of a specific `validator[t][j]` for epoch `t` is then proportional to this validator's fraction of the total stake:

```
validator[t][j] = (uptime[t][j] * stake[t][j] * reward[t]) / total_stake[t]
```
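A worked example of the two formulas above (the numbers are arbitrary, chosen only to show the computation):

```python
ONLINE_THRESHOLD = 0.9

def uptime(blocks, expected_blocks, chunks, expected_chunks):
    # Average of the block and chunk production ratios, rescaled above the threshold.
    pct_online = (blocks / expected_blocks + chunks / expected_chunks) / 2
    if pct_online > ONLINE_THRESHOLD:
        return (pct_online - ONLINE_THRESHOLD) / (1 - ONLINE_THRESHOLD)
    return 0.0

def validator_reward(up, stake, epoch_reward, total_stake):
    # The reward is proportional to uptime and to the validator's share of the total stake.
    return up * stake * epoch_reward / total_stake

# 95/100 blocks and 97/100 chunks -> pct_online = 0.96 -> uptime = 0.6
u = validator_reward(uptime(95, 100, 97, 100), 1000, 50000, 100000)
```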
+
+
+
```python
# Check whether the chunk is invalid, because the proofs in the header don't match the body.
def chunk_proofs_condition(chunk):
    # TODO

# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch_update_validators(validators):
    # ...
    for validator in validators:
        if validator.is_slashed:
            validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
```

```python
# Check whether the chunk header's post state root is invalid,
# because executing the previous chunk doesn't lead to it.
def chunk_state_condition(prev_chunk, prev_state, chunk_header):
    # TODO

# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch(..., validators):
    # ...
    for validator in validators:
        if validator.is_slashed:
            validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
```

The treasury account `TREASURY_ACCOUNT_ID` receives a fraction of the reward every epoch `t`:

```python
# At the end of the epoch, update the treasury.
def end_of_epoch(..., reward):
    # ...
    accounts[TREASURY_ACCOUNT_ID].amount += TREASURY_PCT * reward
```

+
+
+
+
+A standard interface for fungible tokens allowing for ownership, escrow and transfer, specifically targeting third-party marketplace integration.
+
NEAR Protocol uses an asynchronous sharded Runtime. This means the following:

- Storage for different contracts and accounts can be located on different shards.
- Two contracts can be executed at the same time in different shards.

While this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development.
For example, if one contract wants to query some information from the state of another contract (e.g. the current balance), by the time the first contract receives the balance, the real balance may have changed.
This means that in an async system, a contract can't rely on the state of another contract and assume it's not going to change.
Instead, a contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks.
In this standard we're trying to avoid enforcing locks, since most actions can still be completed without locks by transferring ownership to an escrow account.
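The staleness problem can be shown with a toy single-threaded model; the names and the interleaving here are contrived for illustration, not a real runtime:

```python
# Toy model: contract B holds balances; contract A queries one asynchronously.
balances = {"alice": 100}

def query_balance(account_id):
    # What contract A receives in its callback.
    return balances[account_id]

def transfer(account_id, amount):
    # A transaction that lands between A's query and A's callback.
    balances[account_id] -= amount

observed = query_balance("alice")  # A observes 100
transfer("alice", 100)             # the balance changes before A can act
# A's cached `observed` no longer matches the real state.
print(observed, balances["alice"])
```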
+Prior art:
+
+
We should be able to do the following:

- Initialize the contract once. The given total supply will be owned by the given account ID.
- Get the total supply.
- Transfer tokens to a new user.
- Set a given allowance for an escrow account ID.
  - The escrow will be able to transfer up to this allowance from your account.
- Get the current balance for a given account ID.
- Transfer tokens from one user to another.
- Get the current allowance for an escrow account on behalf of the balance owner. This should only be used in the UI, since a contract shouldn't rely on this temporary information.

+There are a few concepts in the scenarios above:
+
+- Total supply. It's the total number of tokens in circulation.
+- Balance owner. An account ID that owns some amount of tokens.
+- Balance. Some amount of tokens.
+- Transfer. Action that moves some amount from one account to another account.
+- Escrow. A different account from the balance owner who has permission to use some amount of tokens.
+- Allowance. The amount of tokens an escrow account can use on behalf of the account owner.
+
Note that precision is not part of the default standard, since it's not required to perform actions. The minimum
value is always 1 token.
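If a contract does expose decimals via a hypothetical precision extension, converting a human-readable amount to the stored integer looks like this (the helper name is ours, not part of the standard):

```python
def to_indivisible_units(human_amount, decimals):
    # `decimals` is contract-specific; 8 for the wBTC example below.
    return human_amount * 10**decimals

print(to_indivisible_units(5, 8))  # 500000000, matching the wBTC transfer scenario
```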
+
Alice wants to send 5 wBTC tokens to Bob.
Assumptions

- The wBTC token contract is `wbtc`.
- Alice's account is `alice`.
- Bob's account is `bob`.
- The precision on the wBTC contract is `10^8`.
- The 5 tokens are `5 * 10^8`, or `500000000` as a number.

High-level explanation
Alice needs to issue one transaction to the wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.
Technical calls

- `alice` calls `wbtc::transfer({"new_owner_id": "bob", "amount": "500000000"})`.

Alice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.
Assumptions

- The DAI token contract is `dai`.
- Alice's account is `alice`.
- The compound interest contract is `compound`.
- The precision on the DAI contract is `10^18`.
- The 1000 tokens are `1000 * 10^18`, or `1000000000000000000000` as a number.
- The compound contract can work with multiple token types.

High-level explanation
Alice needs to issue 2 transactions. The first one, to `dai`, sets an allowance for `compound` to be able to withdraw tokens from `alice`.
The second transaction, to `compound`, starts the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from `alice`.

- If the transfer succeeds, `compound` can increase the local ownership for `alice` by 1000 DAI.
- If the transfer fails, `compound` doesn't need to do anything in the current example, but it could notify `alice` of the unsuccessful transfer.

Technical calls

- `alice` calls `dai::set_allowance({"escrow_account_id": "compound", "allowance": "1000000000000000000000"})`.
- `alice` calls `compound::deposit({"token_contract": "dai", "amount": "1000000000000000000000"})`. During the `deposit` call, `compound` does the following:
  - makes an async call `dai::transfer_from({"owner_id": "alice", "new_owner_id": "compound", "amount": "1000000000000000000000"})`.
  - attaches a callback `compound::on_transfer({"owner_id": "alice", "token_contract": "dai", "amount": "1000000000000000000000"})`.

Charlie wants to exchange his wLTC for wBTC on a decentralized exchange (DEX) contract. Alex wants to buy wLTC and has 80 wBTC.
Assumptions

- The wLTC token contract is `wltc`.
- The wBTC token contract is `wbtc`.
- The DEX contract is `dex`.
- Charlie's account is `charlie`.
- Alex's account is `alex`.
- The precision on both token contracts is `10^8`.
- The 9001 wLTC tokens Alex wants are `9001 * 10^8`, or `900100000000` as a number.
- The 80 wBTC tokens are `80 * 10^8`, or `8000000000` as a number.
- Charlie has 1000000 wLTC tokens, which is `1000000 * 10^8`, or `100000000000000` as a number.
- The DEX contract already has an open order by `alex` to sell 80 wBTC tokens for 9001 wLTC.
- Without a Safes implementation, the DEX has to act as an escrow and hold the funds of both users before it can do an exchange.

High-level explanation
Let's first set up the open order by Alex on the DEX. It's similar to the Token deposit to a contract example above.

- Alex sets an allowance on wBTC for the DEX.
- Alex calls deposit on the DEX for wBTC.
- Alex calls the DEX to make a new sell order.

Then Charlie comes and decides to fulfill the order by selling his wLTC to Alex on the DEX.
Charlie calls the DEX:

- Charlie sets an allowance on wLTC for the DEX.
- Charlie calls deposit on the DEX for wLTC.
- Charlie then calls the DEX to take the order from Alex.

When called, the DEX makes 2 async transfer calls to exchange the corresponding tokens.

- The DEX calls wLTC to transfer tokens from the DEX to Alex.
- The DEX calls wBTC to transfer tokens from the DEX to Charlie.

Technical calls

- `alex` calls `wbtc::set_allowance({"escrow_account_id": "dex", "allowance": "8000000000"})`.
- `alex` calls `dex::deposit({"token": "wbtc", "amount": "8000000000"})`.
  - `dex` calls `wbtc::transfer_from({"owner_id": "alex", "new_owner_id": "dex", "amount": "8000000000"})`.
- `alex` calls `dex::trade({"have": "wbtc", "have_amount": "8000000000", "want": "wltc", "want_amount": "900100000000"})`.
- `charlie` calls `wltc::set_allowance({"escrow_account_id": "dex", "allowance": "100000000000000"})`.
- `charlie` calls `dex::deposit({"token": "wltc", "amount": "100000000000000"})`.
  - `dex` calls `wltc::transfer_from({"owner_id": "charlie", "new_owner_id": "dex", "amount": "100000000000000"})`.
- `charlie` calls `dex::trade({"have": "wltc", "have_amount": "900100000000", "want": "wbtc", "want_amount": "8000000000"})`.
  - `dex` calls `wbtc::transfer({"new_owner_id": "charlie", "amount": "8000000000"})`.
  - `dex` calls `wltc::transfer({"new_owner_id": "alex", "amount": "900100000000"})`.

The full implementation in Rust can be found here: https://github.com/nearprotocol/near-sdk-rs/blob/master/examples/fungible-token/src/lib.rs
NOTES:

- All amounts, balances and allowances are limited by U128 (max value `2**128 - 1`).
- The token standard uses JSON for serialization of arguments and results.
- Amounts in arguments and results are serialized as Base-10 strings, e.g. `"100"`. This is done to avoid the JSON limitation of a max integer value of `2**53`.
+
Interface:

```rust
/******************/
/* CHANGE METHODS */
/******************/

/// Sets the `allowance` for `escrow_account_id` on the account of the caller of this contract
/// (`predecessor_id`) who is the balance owner.
pub fn set_allowance(&mut self, escrow_account_id: AccountId, allowance: U128);

/// Transfers the `amount` of tokens from `owner_id` to the `new_owner_id`.
/// Requirements:
/// * `amount` should be a positive integer.
/// * `owner_id` should have a balance on the account greater or equal than the transfer `amount`.
/// * If this function is called by an escrow account (`owner_id != predecessor_account_id`),
///   then the allowance of the caller of the function (`predecessor_account_id`) on
///   the account of `owner_id` should be greater or equal than the transfer `amount`.
pub fn transfer_from(&mut self, owner_id: AccountId, new_owner_id: AccountId, amount: U128);

/// Transfers `amount` of tokens from the caller of the contract (`predecessor_id`) to
/// `new_owner_id`.
/// Acts the same way as `transfer_from` with `owner_id` equal to the caller of the contract
/// (`predecessor_id`).
pub fn transfer(&mut self, new_owner_id: AccountId, amount: U128);

/****************/
/* VIEW METHODS */
/****************/

/// Returns the total supply of tokens.
pub fn get_total_supply(&self) -> U128;

/// Returns the balance of the `owner_id` account.
pub fn get_balance(&self, owner_id: AccountId) -> U128;

/// Returns the current allowance of `escrow_account_id` for the account of `owner_id`.
///
/// NOTE: Other contracts should not rely on this information, because by the moment a contract
/// receives this information, the allowance may already have been changed by the owner.
/// So this method should only be used on the front-end to see the current allowance.
pub fn get_allowance(&self, owner_id: AccountId, escrow_account_id: AccountId) -> U128;
```

- The current interface doesn't have minting, precision (decimals), or naming. These should be done as extensions, e.g. a Precision extension.
- It's not possible to exchange tokens without transferring them to escrow first.
- It's not possible to transfer tokens to a contract with a single transaction without setting the allowance first.
  It should be possible if we introduce a `transfer_with` function that transfers tokens and calls the escrow contract. It needs to handle the result of the execution, and contracts have to be aware of this API.
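The `transfer_with` idea could be sketched in pseudocode as follows; nothing here is part of the current standard, and all names are hypothetical:

```
# Hypothetical extension, not in the standard: transfer and notify in one transaction.
def transfer_with(token, new_owner_id, amount, on_receive_args):
    # 1. Move the tokens to the receiving contract up front.
    token.transfer(new_owner_id, amount)
    # 2. Call into the receiving contract so it can react to the deposit.
    result = call(new_owner_id, "on_receive", on_receive_args)
    # 3. If the receiver rejects the deposit, refund the sender.
    if not result.ok():
        token.transfer(predecessor_id(), amount)
```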
+
+
+
- Support for multiple token types
- Minting and burning
- Precision, naming and a short token name
+
+
+